Test Report: KVM_Linux_crio 20033

ff5f503981c4fd2196f1d2b6598014c1f7aaa64b:2024-12-02:37311

Failed tests (32/320)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 150.96
38 TestAddons/parallel/MetricsServer 328.33
47 TestAddons/StoppedEnableDisable 154.19
125 TestFunctional/parallel/ImageCommands/ImageBuild 12.27
166 TestMultiControlPlane/serial/StopSecondaryNode 141.39
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.55
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.36
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.53
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 407.41
173 TestMultiControlPlane/serial/StopCluster 141.89
233 TestMultiNode/serial/RestartKeepsNodes 323.69
235 TestMultiNode/serial/StopMultiNode 145.03
242 TestPreload 164.37
250 TestKubernetesUpgrade 399.42
293 TestStartStop/group/old-k8s-version/serial/FirstStart 270.6
298 TestStartStop/group/no-preload/serial/Stop 138.96
305 TestStartStop/group/embed-certs/serial/Stop 139.08
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
319 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
320 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 103.29
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
325 TestStartStop/group/old-k8s-version/serial/SecondStart 704.8
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.03
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.6
332 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.54
333 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.82
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.61
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 501.94
336 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 374
337 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 174.4
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 155.66
TestAddons/parallel/Ingress (150.96s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-093588 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-093588 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-093588 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9cf016d6-ed93-4bb5-94f4-88b82ea95ba5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9cf016d6-ed93-4bb5-94f4-88b82ea95ba5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003858989s
I1202 11:34:05.895279   13416 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2024/12/02 11:34:05 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
I1202 11:34:05.900016   13416 retry.go:31] will retry after 856.565136ms: GET http://192.168.39.203:5000 giving up after 5 attempt(s): Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:34:06 [DEBUG] GET http://192.168.39.203:5000
2024/12/02 11:34:06 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:34:06 [DEBUG] GET http://192.168.39.203:5000: retrying in 1s (4 left)
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-093588 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.044011418s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-093588 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.203
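For local triage, the two commands this test drives can be replayed outside the harness. The sketch below is a minimal Go program that shells out to the same kubectl and minikube invocations recorded above; the profile name (addons-093588) and binary path (out/minikube-linux-amd64) are taken from this log and will need adjusting on another machine. It only reproduces the failing check, not the test's 8m0s pod wait or retries.

// repro_ingress_check.go: minimal sketch, assuming kubectl and the minikube
// binary from this run are available on the host that ran the job.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, prints its combined output, and returns any error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	return err
}

func main() {
	// Same readiness gate the test applies to the ingress-nginx controller.
	if err := run("kubectl", "--context", "addons-093588", "wait",
		"--for=condition=ready", "--namespace=ingress-nginx", "pod",
		"--selector=app.kubernetes.io/component=controller", "--timeout=90s"); err != nil {
		fmt.Println("ingress controller not ready:", err)
		return
	}
	// The in-VM curl that failed above; exit status 28 from the ssh'd command
	// matches curl's "operation timed out" code.
	if err := run("out/minikube-linux-amd64", "-p", "addons-093588", "ssh",
		"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"); err != nil {
		fmt.Println("ingress request failed:", err)
	}
}

Running the two steps separately helps distinguish a controller that never became ready from one that is ready but not answering on 127.0.0.1 inside the VM, which is what the exit status 28 above suggests.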
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-093588 -n addons-093588
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 logs -n 25: (1.421739284s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| delete  | -p download-only-257770                                                                     | download-only-257770 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| delete  | -p download-only-407914                                                                     | download-only-407914 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| delete  | -p download-only-257770                                                                     | download-only-257770 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-408241 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | binary-mirror-408241                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43999                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-408241                                                                     | binary-mirror-408241 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-093588                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-093588                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-093588 --wait=true                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:32 UTC | 02 Dec 24 11:32 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:32 UTC | 02 Dec 24 11:33 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | -p addons-093588                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-093588 ssh cat                                                                       | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | /opt/local-path-provisioner/pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-093588 ip                                                                            | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-093588 ssh curl -s                                                                   | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-093588 ip                                                                            | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:37.455381   14046 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:37.455480   14046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:37.455489   14046 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:37.455493   14046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:37.455668   14046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:30:37.456323   14046 out.go:352] Setting JSON to false
	I1202 11:30:37.457128   14046 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":789,"bootTime":1733138248,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:37.457182   14046 start.go:139] virtualization: kvm guest
	I1202 11:30:37.459050   14046 out.go:177] * [addons-093588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:30:37.460220   14046 notify.go:220] Checking for updates...
	I1202 11:30:37.460254   14046 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:30:37.461315   14046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:37.462351   14046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:30:37.463400   14046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:37.464380   14046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:30:37.465325   14046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:30:37.466424   14046 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:37.495915   14046 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 11:30:37.497029   14046 start.go:297] selected driver: kvm2
	I1202 11:30:37.497047   14046 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:30:37.497060   14046 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:30:37.497712   14046 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:37.497776   14046 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:30:37.512199   14046 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:30:37.512258   14046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:37.512498   14046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:30:37.512526   14046 cni.go:84] Creating CNI manager for ""
	I1202 11:30:37.512569   14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:30:37.512581   14046 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:37.512629   14046 start.go:340] cluster config:
	{Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:37.512716   14046 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:37.515047   14046 out.go:177] * Starting "addons-093588" primary control-plane node in "addons-093588" cluster
	I1202 11:30:37.516087   14046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:37.516117   14046 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:37.516127   14046 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:37.516196   14046 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:30:37.516208   14046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:30:37.516518   14046 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json ...
	I1202 11:30:37.516542   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json: {Name:mk15de776ac6faf6fd8a23110b6fb90c273126c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:30:37.516686   14046 start.go:360] acquireMachinesLock for addons-093588: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:30:37.516736   14046 start.go:364] duration metric: took 35.877µs to acquireMachinesLock for "addons-093588"
	I1202 11:30:37.516755   14046 start.go:93] Provisioning new machine with config: &{Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:30:37.516809   14046 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 11:30:37.518955   14046 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1202 11:30:37.519064   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:30:37.519111   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:30:37.532176   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I1202 11:30:37.532631   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:30:37.533117   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:30:37.533134   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:30:37.533432   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:30:37.533598   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:30:37.533741   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:30:37.533872   14046 start.go:159] libmachine.API.Create for "addons-093588" (driver="kvm2")
	I1202 11:30:37.533900   14046 client.go:168] LocalClient.Create starting
	I1202 11:30:37.533936   14046 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:30:37.890362   14046 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:30:38.028981   14046 main.go:141] libmachine: Running pre-create checks...
	I1202 11:30:38.029000   14046 main.go:141] libmachine: (addons-093588) Calling .PreCreateCheck
	I1202 11:30:38.029460   14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
	I1202 11:30:38.029866   14046 main.go:141] libmachine: Creating machine...
	I1202 11:30:38.029880   14046 main.go:141] libmachine: (addons-093588) Calling .Create
	I1202 11:30:38.030036   14046 main.go:141] libmachine: (addons-093588) Creating KVM machine...
	I1202 11:30:38.031150   14046 main.go:141] libmachine: (addons-093588) DBG | found existing default KVM network
	I1202 11:30:38.031811   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.031684   14068 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I1202 11:30:38.031852   14046 main.go:141] libmachine: (addons-093588) DBG | created network xml: 
	I1202 11:30:38.031872   14046 main.go:141] libmachine: (addons-093588) DBG | <network>
	I1202 11:30:38.031885   14046 main.go:141] libmachine: (addons-093588) DBG |   <name>mk-addons-093588</name>
	I1202 11:30:38.031900   14046 main.go:141] libmachine: (addons-093588) DBG |   <dns enable='no'/>
	I1202 11:30:38.031929   14046 main.go:141] libmachine: (addons-093588) DBG |   
	I1202 11:30:38.031958   14046 main.go:141] libmachine: (addons-093588) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1202 11:30:38.031973   14046 main.go:141] libmachine: (addons-093588) DBG |     <dhcp>
	I1202 11:30:38.031985   14046 main.go:141] libmachine: (addons-093588) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1202 11:30:38.031991   14046 main.go:141] libmachine: (addons-093588) DBG |     </dhcp>
	I1202 11:30:38.031998   14046 main.go:141] libmachine: (addons-093588) DBG |   </ip>
	I1202 11:30:38.032003   14046 main.go:141] libmachine: (addons-093588) DBG |   
	I1202 11:30:38.032010   14046 main.go:141] libmachine: (addons-093588) DBG | </network>
	I1202 11:30:38.032020   14046 main.go:141] libmachine: (addons-093588) DBG | 
	I1202 11:30:38.037024   14046 main.go:141] libmachine: (addons-093588) DBG | trying to create private KVM network mk-addons-093588 192.168.39.0/24...
	I1202 11:30:38.095436   14046 main.go:141] libmachine: (addons-093588) DBG | private KVM network mk-addons-093588 192.168.39.0/24 created
	I1202 11:30:38.095476   14046 main.go:141] libmachine: (addons-093588) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 ...
	I1202 11:30:38.095496   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.095389   14068 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:38.095510   14046 main.go:141] libmachine: (addons-093588) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:30:38.095536   14046 main.go:141] libmachine: (addons-093588) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:30:38.351649   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.351512   14068 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa...
	I1202 11:30:38.416171   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.416080   14068 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/addons-093588.rawdisk...
	I1202 11:30:38.416198   14046 main.go:141] libmachine: (addons-093588) DBG | Writing magic tar header
	I1202 11:30:38.416275   14046 main.go:141] libmachine: (addons-093588) DBG | Writing SSH key tar header
	I1202 11:30:38.416312   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.416182   14068 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 ...
	I1202 11:30:38.416332   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 (perms=drwx------)
	I1202 11:30:38.416347   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588
	I1202 11:30:38.416361   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:30:38.416368   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:38.416379   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:30:38.416384   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:30:38.416391   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:30:38.416403   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:30:38.416414   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:30:38.416422   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:30:38.416433   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:30:38.416445   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home
	I1202 11:30:38.416458   14046 main.go:141] libmachine: (addons-093588) DBG | Skipping /home - not owner
	I1202 11:30:38.416469   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:30:38.416477   14046 main.go:141] libmachine: (addons-093588) Creating domain...
	I1202 11:30:38.417348   14046 main.go:141] libmachine: (addons-093588) define libvirt domain using xml: 
	I1202 11:30:38.417385   14046 main.go:141] libmachine: (addons-093588) <domain type='kvm'>
	I1202 11:30:38.417398   14046 main.go:141] libmachine: (addons-093588)   <name>addons-093588</name>
	I1202 11:30:38.417413   14046 main.go:141] libmachine: (addons-093588)   <memory unit='MiB'>4000</memory>
	I1202 11:30:38.417424   14046 main.go:141] libmachine: (addons-093588)   <vcpu>2</vcpu>
	I1202 11:30:38.417430   14046 main.go:141] libmachine: (addons-093588)   <features>
	I1202 11:30:38.417442   14046 main.go:141] libmachine: (addons-093588)     <acpi/>
	I1202 11:30:38.417452   14046 main.go:141] libmachine: (addons-093588)     <apic/>
	I1202 11:30:38.417460   14046 main.go:141] libmachine: (addons-093588)     <pae/>
	I1202 11:30:38.417469   14046 main.go:141] libmachine: (addons-093588)     
	I1202 11:30:38.417482   14046 main.go:141] libmachine: (addons-093588)   </features>
	I1202 11:30:38.417492   14046 main.go:141] libmachine: (addons-093588)   <cpu mode='host-passthrough'>
	I1202 11:30:38.417497   14046 main.go:141] libmachine: (addons-093588)   
	I1202 11:30:38.417520   14046 main.go:141] libmachine: (addons-093588)   </cpu>
	I1202 11:30:38.417539   14046 main.go:141] libmachine: (addons-093588)   <os>
	I1202 11:30:38.417549   14046 main.go:141] libmachine: (addons-093588)     <type>hvm</type>
	I1202 11:30:38.417564   14046 main.go:141] libmachine: (addons-093588)     <boot dev='cdrom'/>
	I1202 11:30:38.417575   14046 main.go:141] libmachine: (addons-093588)     <boot dev='hd'/>
	I1202 11:30:38.417584   14046 main.go:141] libmachine: (addons-093588)     <bootmenu enable='no'/>
	I1202 11:30:38.417595   14046 main.go:141] libmachine: (addons-093588)   </os>
	I1202 11:30:38.417604   14046 main.go:141] libmachine: (addons-093588)   <devices>
	I1202 11:30:38.417614   14046 main.go:141] libmachine: (addons-093588)     <disk type='file' device='cdrom'>
	I1202 11:30:38.417628   14046 main.go:141] libmachine: (addons-093588)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/boot2docker.iso'/>
	I1202 11:30:38.417650   14046 main.go:141] libmachine: (addons-093588)       <target dev='hdc' bus='scsi'/>
	I1202 11:30:38.417668   14046 main.go:141] libmachine: (addons-093588)       <readonly/>
	I1202 11:30:38.417681   14046 main.go:141] libmachine: (addons-093588)     </disk>
	I1202 11:30:38.417693   14046 main.go:141] libmachine: (addons-093588)     <disk type='file' device='disk'>
	I1202 11:30:38.417706   14046 main.go:141] libmachine: (addons-093588)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:30:38.417720   14046 main.go:141] libmachine: (addons-093588)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/addons-093588.rawdisk'/>
	I1202 11:30:38.417732   14046 main.go:141] libmachine: (addons-093588)       <target dev='hda' bus='virtio'/>
	I1202 11:30:38.417743   14046 main.go:141] libmachine: (addons-093588)     </disk>
	I1202 11:30:38.417756   14046 main.go:141] libmachine: (addons-093588)     <interface type='network'>
	I1202 11:30:38.417768   14046 main.go:141] libmachine: (addons-093588)       <source network='mk-addons-093588'/>
	I1202 11:30:38.417780   14046 main.go:141] libmachine: (addons-093588)       <model type='virtio'/>
	I1202 11:30:38.417789   14046 main.go:141] libmachine: (addons-093588)     </interface>
	I1202 11:30:38.417800   14046 main.go:141] libmachine: (addons-093588)     <interface type='network'>
	I1202 11:30:38.417815   14046 main.go:141] libmachine: (addons-093588)       <source network='default'/>
	I1202 11:30:38.417824   14046 main.go:141] libmachine: (addons-093588)       <model type='virtio'/>
	I1202 11:30:38.417832   14046 main.go:141] libmachine: (addons-093588)     </interface>
	I1202 11:30:38.417847   14046 main.go:141] libmachine: (addons-093588)     <serial type='pty'>
	I1202 11:30:38.417858   14046 main.go:141] libmachine: (addons-093588)       <target port='0'/>
	I1202 11:30:38.417868   14046 main.go:141] libmachine: (addons-093588)     </serial>
	I1202 11:30:38.417882   14046 main.go:141] libmachine: (addons-093588)     <console type='pty'>
	I1202 11:30:38.417900   14046 main.go:141] libmachine: (addons-093588)       <target type='serial' port='0'/>
	I1202 11:30:38.417909   14046 main.go:141] libmachine: (addons-093588)     </console>
	I1202 11:30:38.417919   14046 main.go:141] libmachine: (addons-093588)     <rng model='virtio'>
	I1202 11:30:38.417930   14046 main.go:141] libmachine: (addons-093588)       <backend model='random'>/dev/random</backend>
	I1202 11:30:38.417942   14046 main.go:141] libmachine: (addons-093588)     </rng>
	I1202 11:30:38.417955   14046 main.go:141] libmachine: (addons-093588)     
	I1202 11:30:38.417965   14046 main.go:141] libmachine: (addons-093588)     
	I1202 11:30:38.417974   14046 main.go:141] libmachine: (addons-093588)   </devices>
	I1202 11:30:38.417982   14046 main.go:141] libmachine: (addons-093588) </domain>
	I1202 11:30:38.417991   14046 main.go:141] libmachine: (addons-093588) 
	I1202 11:30:38.423153   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:41:86:b0 in network default
	I1202 11:30:38.423632   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:38.423650   14046 main.go:141] libmachine: (addons-093588) Ensuring networks are active...
	I1202 11:30:38.424163   14046 main.go:141] libmachine: (addons-093588) Ensuring network default is active
	I1202 11:30:38.424413   14046 main.go:141] libmachine: (addons-093588) Ensuring network mk-addons-093588 is active
	I1202 11:30:38.424831   14046 main.go:141] libmachine: (addons-093588) Getting domain xml...
	I1202 11:30:38.425386   14046 main.go:141] libmachine: (addons-093588) Creating domain...
	I1202 11:30:39.768153   14046 main.go:141] libmachine: (addons-093588) Waiting to get IP...
	I1202 11:30:39.769048   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:39.769406   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:39.769434   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:39.769391   14068 retry.go:31] will retry after 262.465444ms: waiting for machine to come up
	I1202 11:30:40.033678   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:40.034019   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:40.034047   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.033987   14068 retry.go:31] will retry after 268.465291ms: waiting for machine to come up
	I1202 11:30:40.304474   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:40.304856   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:40.304886   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.304845   14068 retry.go:31] will retry after 459.329717ms: waiting for machine to come up
	I1202 11:30:40.765148   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:40.765539   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:40.765576   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.765500   14068 retry.go:31] will retry after 473.589572ms: waiting for machine to come up
	I1202 11:30:41.241029   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:41.241356   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:41.241402   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:41.241309   14068 retry.go:31] will retry after 489.24768ms: waiting for machine to come up
	I1202 11:30:41.732001   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:41.732402   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:41.732428   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:41.732337   14068 retry.go:31] will retry after 764.713135ms: waiting for machine to come up
	I1202 11:30:42.498043   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:42.498440   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:42.498462   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:42.498418   14068 retry.go:31] will retry after 1.105216684s: waiting for machine to come up
	I1202 11:30:43.605335   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:43.605759   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:43.605784   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:43.605714   14068 retry.go:31] will retry after 1.334125941s: waiting for machine to come up
	I1202 11:30:44.942153   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:44.942579   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:44.942604   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:44.942535   14068 retry.go:31] will retry after 1.384283544s: waiting for machine to come up
	I1202 11:30:46.329052   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:46.329455   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:46.329485   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:46.329405   14068 retry.go:31] will retry after 1.997806074s: waiting for machine to come up
	I1202 11:30:48.328389   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:48.328833   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:48.328861   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:48.328789   14068 retry.go:31] will retry after 2.344508632s: waiting for machine to come up
	I1202 11:30:50.676551   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:50.676981   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:50.677010   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:50.676934   14068 retry.go:31] will retry after 3.069367748s: waiting for machine to come up
	I1202 11:30:53.748570   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:53.748926   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:53.748950   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:53.748888   14068 retry.go:31] will retry after 2.996899134s: waiting for machine to come up
	I1202 11:30:56.749121   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:56.749572   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:56.749597   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:56.749520   14068 retry.go:31] will retry after 4.228069851s: waiting for machine to come up
	I1202 11:31:00.981506   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:00.981936   14046 main.go:141] libmachine: (addons-093588) Found IP for machine: 192.168.39.203
	I1202 11:31:00.981958   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has current primary IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:00.981964   14046 main.go:141] libmachine: (addons-093588) Reserving static IP address...
	I1202 11:31:00.982295   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find host DHCP lease matching {name: "addons-093588", mac: "52:54:00:8a:ff:d0", ip: "192.168.39.203"} in network mk-addons-093588
	I1202 11:31:01.048415   14046 main.go:141] libmachine: (addons-093588) DBG | Getting to WaitForSSH function...
	I1202 11:31:01.048442   14046 main.go:141] libmachine: (addons-093588) Reserved static IP address: 192.168.39.203
	I1202 11:31:01.048454   14046 main.go:141] libmachine: (addons-093588) Waiting for SSH to be available...
	I1202 11:31:01.051059   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.051438   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.051472   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.051619   14046 main.go:141] libmachine: (addons-093588) DBG | Using SSH client type: external
	I1202 11:31:01.051638   14046 main.go:141] libmachine: (addons-093588) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa (-rw-------)
	I1202 11:31:01.051663   14046 main.go:141] libmachine: (addons-093588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:31:01.051673   14046 main.go:141] libmachine: (addons-093588) DBG | About to run SSH command:
	I1202 11:31:01.051681   14046 main.go:141] libmachine: (addons-093588) DBG | exit 0
	I1202 11:31:01.179840   14046 main.go:141] libmachine: (addons-093588) DBG | SSH cmd err, output: <nil>: 
	I1202 11:31:01.180069   14046 main.go:141] libmachine: (addons-093588) KVM machine creation complete!
	I1202 11:31:01.180372   14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
	I1202 11:31:01.181030   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:01.181223   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:01.181368   14046 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:31:01.181383   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:01.182471   14046 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:31:01.182489   14046 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:31:01.182497   14046 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:31:01.182504   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.184526   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.184815   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.184837   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.184948   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.185116   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.185254   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.185411   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.185565   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.185779   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.185793   14046 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:31:01.282941   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:31:01.282962   14046 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:31:01.282973   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.285619   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.285952   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.285985   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.286145   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.286305   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.286461   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.286576   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.286731   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.286920   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.286931   14046 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:31:01.388462   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:31:01.388501   14046 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:31:01.388507   14046 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:31:01.388512   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:31:01.388677   14046 buildroot.go:166] provisioning hostname "addons-093588"
	I1202 11:31:01.388696   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:31:01.388841   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.391137   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.391506   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.391534   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.391652   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.391816   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.391965   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.392102   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.392246   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.392391   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.392402   14046 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-093588 && echo "addons-093588" | sudo tee /etc/hostname
	I1202 11:31:01.506202   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093588
	
	I1202 11:31:01.506240   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.509060   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.509411   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.509432   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.509608   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.509804   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.509958   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.510079   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.510222   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.510393   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.510415   14046 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-093588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093588/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-093588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:31:01.616311   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:31:01.616347   14046 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:31:01.616393   14046 buildroot.go:174] setting up certificates
	I1202 11:31:01.616410   14046 provision.go:84] configureAuth start
	I1202 11:31:01.616430   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:31:01.616682   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:01.619505   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.620156   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.620182   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.620327   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.622275   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.622543   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.622570   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.622692   14046 provision.go:143] copyHostCerts
	I1202 11:31:01.622767   14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:31:01.622899   14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:31:01.622955   14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:31:01.623001   14046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.addons-093588 san=[127.0.0.1 192.168.39.203 addons-093588 localhost minikube]
	I1202 11:31:01.923775   14046 provision.go:177] copyRemoteCerts
	I1202 11:31:01.923832   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:31:01.923854   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.926193   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.926521   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.926551   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.926687   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.926841   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.926972   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.927075   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.005579   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:31:02.029137   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:31:02.051665   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 11:31:02.074035   14046 provision.go:87] duration metric: took 457.609565ms to configureAuth
	I1202 11:31:02.074059   14046 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:31:02.074217   14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:02.074283   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.076631   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.076987   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.077013   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.077164   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.077336   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.077492   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.077615   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.077760   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:02.077906   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:02.077920   14046 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:31:02.287644   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:31:02.287666   14046 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:31:02.287672   14046 main.go:141] libmachine: (addons-093588) Calling .GetURL
	I1202 11:31:02.288858   14046 main.go:141] libmachine: (addons-093588) DBG | Using libvirt version 6000000
	I1202 11:31:02.290750   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.291050   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.291080   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.291195   14046 main.go:141] libmachine: Docker is up and running!
	I1202 11:31:02.291216   14046 main.go:141] libmachine: Reticulating splines...
	I1202 11:31:02.291222   14046 client.go:171] duration metric: took 24.757312526s to LocalClient.Create
	I1202 11:31:02.291244   14046 start.go:167] duration metric: took 24.757374154s to libmachine.API.Create "addons-093588"
	I1202 11:31:02.291261   14046 start.go:293] postStartSetup for "addons-093588" (driver="kvm2")
	I1202 11:31:02.291272   14046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:31:02.291288   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.291502   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:31:02.291522   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.293349   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.293594   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.293619   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.293743   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.293886   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.294032   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.294145   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.373911   14046 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:31:02.378111   14046 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:31:02.378132   14046 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:31:02.378192   14046 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:31:02.378214   14046 start.go:296] duration metric: took 86.945972ms for postStartSetup
	I1202 11:31:02.378245   14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
	I1202 11:31:02.378753   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:02.380981   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.381316   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.381361   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.381564   14046 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json ...
	I1202 11:31:02.381722   14046 start.go:128] duration metric: took 24.864904519s to createHost
	I1202 11:31:02.381743   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.383934   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.384272   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.384314   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.384473   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.384686   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.384826   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.384934   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.385083   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:02.385236   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:02.385245   14046 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:31:02.488569   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139062.461766092
	
	I1202 11:31:02.488586   14046 fix.go:216] guest clock: 1733139062.461766092
	I1202 11:31:02.488594   14046 fix.go:229] Guest: 2024-12-02 11:31:02.461766092 +0000 UTC Remote: 2024-12-02 11:31:02.381733026 +0000 UTC m=+24.960080527 (delta=80.033066ms)
	I1202 11:31:02.488611   14046 fix.go:200] guest clock delta is within tolerance: 80.033066ms
	I1202 11:31:02.488616   14046 start.go:83] releasing machines lock for "addons-093588", held for 24.971869861s
	I1202 11:31:02.488633   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.488804   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:02.491410   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.491718   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.491740   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.491912   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.492303   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.492498   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.492599   14046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:31:02.492640   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.492659   14046 ssh_runner.go:195] Run: cat /version.json
	I1202 11:31:02.492682   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.495098   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.495504   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.495523   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.495555   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.495683   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.495836   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.495981   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.496036   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.496054   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.496127   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.496309   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.496461   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.496596   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.496733   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.593747   14046 ssh_runner.go:195] Run: systemctl --version
	I1202 11:31:02.599449   14046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:31:02.754591   14046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:31:02.760318   14046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:31:02.760381   14046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:31:02.775654   14046 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:31:02.775672   14046 start.go:495] detecting cgroup driver to use...
	I1202 11:31:02.775730   14046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:31:02.790974   14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:31:02.803600   14046 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:31:02.803656   14046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:31:02.816048   14046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:31:02.828952   14046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:31:02.939245   14046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:31:03.103186   14046 docker.go:233] disabling docker service ...
	I1202 11:31:03.103247   14046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:31:03.117174   14046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:31:03.129365   14046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:31:03.241601   14046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:31:03.354550   14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:31:03.368814   14046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:31:03.387288   14046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:31:03.387336   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.397743   14046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:31:03.397802   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.408206   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.418070   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.428088   14046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:31:03.438226   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.448028   14046 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.464548   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.474482   14046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:31:03.483342   14046 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:31:03.483384   14046 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:31:03.495424   14046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:31:03.504365   14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:03.616131   14046 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:31:03.806820   14046 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:31:03.806906   14046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:31:03.811970   14046 start.go:563] Will wait 60s for crictl version
	I1202 11:31:03.812015   14046 ssh_runner.go:195] Run: which crictl
	I1202 11:31:03.815656   14046 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:31:03.854668   14046 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:31:03.854771   14046 ssh_runner.go:195] Run: crio --version
	I1202 11:31:03.883503   14046 ssh_runner.go:195] Run: crio --version
	I1202 11:31:03.943735   14046 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:31:03.978507   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:03.981079   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:03.981440   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:03.981469   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:03.981694   14046 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:31:03.986029   14046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:03.999160   14046 kubeadm.go:883] updating cluster {Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:31:03.999273   14046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:31:03.999318   14046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:04.032753   14046 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 11:31:04.032848   14046 ssh_runner.go:195] Run: which lz4
	I1202 11:31:04.036732   14046 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 11:31:04.040941   14046 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 11:31:04.040969   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 11:31:05.312874   14046 crio.go:462] duration metric: took 1.276172912s to copy over tarball
	I1202 11:31:05.312957   14046 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 11:31:07.438469   14046 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125483138s)
	I1202 11:31:07.438502   14046 crio.go:469] duration metric: took 2.125592032s to extract the tarball
	I1202 11:31:07.438513   14046 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 11:31:07.475913   14046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:07.526664   14046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:31:07.526685   14046 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:31:07.526695   14046 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.2 crio true true} ...
	I1202 11:31:07.526796   14046 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-093588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:31:07.526870   14046 ssh_runner.go:195] Run: crio config
	I1202 11:31:07.582564   14046 cni.go:84] Creating CNI manager for ""
	I1202 11:31:07.582584   14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:31:07.582593   14046 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:31:07.582614   14046 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093588 NodeName:addons-093588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:31:07.582727   14046 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-093588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.203"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:31:07.582780   14046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:31:07.592378   14046 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:31:07.592421   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 11:31:07.601397   14046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1202 11:31:07.617029   14046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:31:07.632123   14046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1202 11:31:07.647544   14046 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I1202 11:31:07.651140   14046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:07.662518   14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:07.774786   14046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:07.795670   14046 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588 for IP: 192.168.39.203
	I1202 11:31:07.795689   14046 certs.go:194] generating shared ca certs ...
	I1202 11:31:07.795704   14046 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:07.795860   14046 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:31:07.881230   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt ...
	I1202 11:31:07.881255   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt: {Name:mkb25dcf874cc76262dd87f7954dc5def047ba80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:07.881433   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key ...
	I1202 11:31:07.881447   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key: {Name:mk24aaecfce06715328a2e1bdf78912e66e577e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:07.881546   14046 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:31:08.066592   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt ...
	I1202 11:31:08.066617   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt: {Name:mk353521566f5b511b2c49b5facbb9d7e8a55579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.066785   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key ...
	I1202 11:31:08.066799   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key: {Name:mk0fae51faecacd368a9e9845e8ec1cc10ac1c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.066891   14046 certs.go:256] generating profile certs ...
	I1202 11:31:08.066943   14046 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key
	I1202 11:31:08.066963   14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt with IP's: []
	I1202 11:31:08.199504   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt ...
	I1202 11:31:08.199534   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: {Name:mke09ad3d888dc6da1ff7604f62658a689c18924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.199693   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key ...
	I1202 11:31:08.199704   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key: {Name:mk29de8a87eafaedfa0731583b4b03810c89d586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.199771   14046 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78
	I1202 11:31:08.199789   14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203]
	I1202 11:31:08.366826   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 ...
	I1202 11:31:08.366857   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78: {Name:mkccb5564ec2f6a186fbab8f5cb67d658caada7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.367032   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78 ...
	I1202 11:31:08.367046   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78: {Name:mk589a09a46c7953a1cc24cad0c706bf9dfb6e43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.367125   14046 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt
	I1202 11:31:08.367205   14046 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key
	I1202 11:31:08.367257   14046 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key
	I1202 11:31:08.367277   14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt with IP's: []
	I1202 11:31:08.450648   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt ...
	I1202 11:31:08.450679   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt: {Name:mke55f3a980df3599f606cdcab7f35740d5da41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.450843   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key ...
	I1202 11:31:08.450854   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key: {Name:mk944302d6559b5e702f266fc95edf52b4fa7b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.451514   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:31:08.451556   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:31:08.451584   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:31:08.451613   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:31:08.452184   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:31:08.489217   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:31:08.517562   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:31:08.543285   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:31:08.569173   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 11:31:08.594856   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:31:08.620538   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:31:08.645986   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 11:31:08.668456   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:31:08.690521   14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:31:08.706980   14046 ssh_runner.go:195] Run: openssl version
	I1202 11:31:08.712732   14046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:31:08.723032   14046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:08.727495   14046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:08.727526   14046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:08.733224   14046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:31:08.743680   14046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:31:08.747664   14046 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:31:08.747708   14046 kubeadm.go:392] StartCluster: {Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:31:08.747775   14046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:31:08.747814   14046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:31:08.787888   14046 cri.go:89] found id: ""
	I1202 11:31:08.787950   14046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:31:08.797362   14046 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:31:08.809451   14046 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:31:08.820282   14046 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:31:08.820297   14046 kubeadm.go:157] found existing configuration files:
	
	I1202 11:31:08.820333   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:31:08.828677   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:31:08.828711   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:31:08.837308   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:31:08.845525   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:31:08.845558   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:31:08.854180   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:31:08.862334   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:31:08.862373   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:31:08.871107   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:31:08.879485   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:31:08.879516   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
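
Editor's sketch of the stale-config check recorded in the lines above, assuming only what the log shows: each kubeconfig that a previous kubeadm run may have left behind is kept only if it already references this cluster's control-plane endpoint, and is removed otherwise so that "kubeadm init" starts from a clean slate. The file paths and endpoint are taken from the log; the loop itself is illustrative and is not minikube's actual code.

    // stale_config_check.go - illustrative only, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern (or the file itself) is missing,
            // which is exactly the "Process exited with status 2" seen in the log.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }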
	I1202 11:31:08.888193   14046 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 11:31:09.042807   14046 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 11:31:19.640576   14046 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:31:19.640684   14046 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:31:19.640804   14046 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:31:19.640929   14046 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:31:19.641054   14046 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:31:19.641154   14046 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:31:19.642488   14046 out.go:235]   - Generating certificates and keys ...
	I1202 11:31:19.642574   14046 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:31:19.642657   14046 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:31:19.642746   14046 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:31:19.642837   14046 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:31:19.642899   14046 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:31:19.642942   14046 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:31:19.642987   14046 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:31:19.643101   14046 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093588 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I1202 11:31:19.643167   14046 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:31:19.643335   14046 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093588 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I1202 11:31:19.643411   14046 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:31:19.643467   14046 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:31:19.643523   14046 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:31:19.643615   14046 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:31:19.643692   14046 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:31:19.643782   14046 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:31:19.643865   14046 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:31:19.643943   14046 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:31:19.643993   14046 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:31:19.644064   14046 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:31:19.644161   14046 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:31:19.645478   14046 out.go:235]   - Booting up control plane ...
	I1202 11:31:19.645576   14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:31:19.645645   14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:31:19.645701   14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:31:19.645795   14046 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:31:19.645918   14046 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:31:19.645986   14046 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:31:19.646132   14046 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:31:19.646250   14046 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:31:19.646306   14046 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.795667ms
	I1202 11:31:19.646374   14046 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:31:19.646429   14046 kubeadm.go:310] [api-check] The API server is healthy after 5.502118168s
	I1202 11:31:19.646515   14046 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:31:19.646628   14046 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:31:19.646678   14046 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:31:19.646851   14046 kubeadm.go:310] [mark-control-plane] Marking the node addons-093588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:31:19.646906   14046 kubeadm.go:310] [bootstrap-token] Using token: 1k1sz6.8l7j2y5vp52tcjwr
	I1202 11:31:19.648784   14046 out.go:235]   - Configuring RBAC rules ...
	I1202 11:31:19.648889   14046 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:31:19.648963   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:31:19.649092   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:31:19.649233   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:31:19.649389   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:31:19.649475   14046 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:31:19.649614   14046 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:31:19.649654   14046 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:31:19.649717   14046 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:31:19.649728   14046 kubeadm.go:310] 
	I1202 11:31:19.649818   14046 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:31:19.649829   14046 kubeadm.go:310] 
	I1202 11:31:19.649939   14046 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:31:19.649949   14046 kubeadm.go:310] 
	I1202 11:31:19.649985   14046 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:31:19.650037   14046 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:31:19.650080   14046 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:31:19.650086   14046 kubeadm.go:310] 
	I1202 11:31:19.650138   14046 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:31:19.650144   14046 kubeadm.go:310] 
	I1202 11:31:19.650183   14046 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:31:19.650189   14046 kubeadm.go:310] 
	I1202 11:31:19.650234   14046 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:31:19.650304   14046 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:31:19.650365   14046 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:31:19.650373   14046 kubeadm.go:310] 
	I1202 11:31:19.650440   14046 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:31:19.650514   14046 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:31:19.650522   14046 kubeadm.go:310] 
	I1202 11:31:19.650595   14046 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1k1sz6.8l7j2y5vp52tcjwr \
	I1202 11:31:19.650697   14046 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 11:31:19.650725   14046 kubeadm.go:310] 	--control-plane 
	I1202 11:31:19.650735   14046 kubeadm.go:310] 
	I1202 11:31:19.650849   14046 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:31:19.650859   14046 kubeadm.go:310] 
	I1202 11:31:19.650970   14046 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1k1sz6.8l7j2y5vp52tcjwr \
	I1202 11:31:19.651080   14046 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 11:31:19.651101   14046 cni.go:84] Creating CNI manager for ""
	I1202 11:31:19.651113   14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:31:19.652324   14046 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 11:31:19.653350   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 11:31:19.663856   14046 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
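
Editor's sketch of the bridge CNI step above: minikube copies a conflist into /etc/cni/net.d/1-k8s.conflist (496 bytes, payload not shown in the log). The snippet below writes a generic bridge-plus-portmap conflist of the kind the CNI spec describes; every field value here is an assumption for illustration, not the exact file minikube installed.

    // write_bridge_conflist.go - illustrative only; field values are assumptions.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Written to a scratch path so the sketch is safe to run; the real file
        // lands in /etc/cni/net.d/ on the node.
        if err := os.WriteFile("1-k8s.conflist.sample", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }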
	I1202 11:31:19.683635   14046 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:31:19.683704   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:19.683726   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093588 minikube.k8s.io/updated_at=2024_12_02T11_31_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=addons-093588 minikube.k8s.io/primary=true
	I1202 11:31:19.809186   14046 ops.go:34] apiserver oom_adj: -16
	I1202 11:31:19.809308   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.309679   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.809603   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.310155   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.809913   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.310060   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.809479   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.309701   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.809394   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.310391   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.414631   14046 kubeadm.go:1113] duration metric: took 4.730985398s to wait for elevateKubeSystemPrivileges
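
Editor's sketch of the "elevateKubeSystemPrivileges" wait recorded above: after binding kube-system:default to cluster-admin via the minikube-rbac ClusterRoleBinding, the run polls "kubectl get sa default" roughly every 500ms (visible in the half-second spacing of the log lines) until the default ServiceAccount exists. The kubectl binary path and kubeconfig flag are taken from the log; the loop and its 2-minute timeout are illustrative assumptions, not minikube's actual code.

    // wait_default_sa.go - illustrative only, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
        kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(2 * time.Minute) // timeout chosen arbitrarily for the sketch

        for time.Now().Before(deadline) {
            // Succeeds once the controller manager has created the "default" ServiceAccount.
            if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }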
	I1202 11:31:24.414668   14046 kubeadm.go:394] duration metric: took 15.666963518s to StartCluster
	I1202 11:31:24.414689   14046 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.414816   14046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:31:24.415263   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.415607   14046 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:31:24.415637   14046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:31:24.415683   14046 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 11:31:24.415803   14046 addons.go:69] Setting inspektor-gadget=true in profile "addons-093588"
	I1202 11:31:24.415812   14046 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093588"
	I1202 11:31:24.415821   14046 addons.go:234] Setting addon inspektor-gadget=true in "addons-093588"
	I1202 11:31:24.415825   14046 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093588"
	I1202 11:31:24.415833   14046 addons.go:69] Setting storage-provisioner=true in profile "addons-093588"
	I1202 11:31:24.415851   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415862   14046 addons.go:234] Setting addon storage-provisioner=true in "addons-093588"
	I1202 11:31:24.415871   14046 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093588"
	I1202 11:31:24.415888   14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.415899   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415899   14046 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093588"
	I1202 11:31:24.415926   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415938   14046 addons.go:69] Setting metrics-server=true in profile "addons-093588"
	I1202 11:31:24.415951   14046 addons.go:234] Setting addon metrics-server=true in "addons-093588"
	I1202 11:31:24.415975   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415801   14046 addons.go:69] Setting yakd=true in profile "addons-093588"
	I1202 11:31:24.416328   14046 addons.go:69] Setting volcano=true in profile "addons-093588"
	I1202 11:31:24.416332   14046 addons.go:234] Setting addon yakd=true in "addons-093588"
	I1202 11:31:24.416340   14046 addons.go:234] Setting addon volcano=true in "addons-093588"
	I1202 11:31:24.416348   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416362   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416364   14046 addons.go:69] Setting volumesnapshots=true in profile "addons-093588"
	I1202 11:31:24.416354   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416376   14046 addons.go:234] Setting addon volumesnapshots=true in "addons-093588"
	I1202 11:31:24.416348   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416391   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416392   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416393   14046 addons.go:69] Setting registry=true in profile "addons-093588"
	I1202 11:31:24.416405   14046 addons.go:234] Setting addon registry=true in "addons-093588"
	I1202 11:31:24.416412   14046 addons.go:69] Setting cloud-spanner=true in profile "addons-093588"
	I1202 11:31:24.416416   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416426   14046 addons.go:234] Setting addon cloud-spanner=true in "addons-093588"
	I1202 11:31:24.416405   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416426   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416450   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416483   14046 addons.go:69] Setting gcp-auth=true in profile "addons-093588"
	I1202 11:31:24.416504   14046 addons.go:69] Setting ingress=true in profile "addons-093588"
	I1202 11:31:24.416357   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416517   14046 addons.go:234] Setting addon ingress=true in "addons-093588"
	I1202 11:31:24.416519   14046 mustload.go:65] Loading cluster: addons-093588
	I1202 11:31:24.416532   14046 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093588"
	I1202 11:31:24.416560   14046 addons.go:69] Setting default-storageclass=true in profile "addons-093588"
	I1202 11:31:24.416586   14046 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093588"
	I1202 11:31:24.416590   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416632   14046 addons.go:69] Setting ingress-dns=true in profile "addons-093588"
	I1202 11:31:24.416650   14046 addons.go:234] Setting addon ingress-dns=true in "addons-093588"
	I1202 11:31:24.416682   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416780   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416807   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416363   14046 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-093588"
	I1202 11:31:24.416859   14046 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-093588"
	I1202 11:31:24.416872   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416890   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416900   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416439   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416920   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416352   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416565   14046 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093588"
	I1202 11:31:24.416997   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417021   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417031   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417043   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417073   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.417115   14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.417236   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.417280   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417310   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417478   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417508   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417581   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417639   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417599   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.417818   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.418318   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.418354   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.418457   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.418520   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.419789   14046 out.go:177] * Verifying Kubernetes components...
	I1202 11:31:24.421204   14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:24.434933   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1202 11:31:24.456366   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I1202 11:31:24.456379   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I1202 11:31:24.456385   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I1202 11:31:24.456522   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1202 11:31:24.457088   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.457138   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.457205   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458101   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458260   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.458270   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.458324   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458368   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.457145   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458977   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.458999   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.459125   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.459137   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.459192   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.459310   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.459320   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.459719   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.459746   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.461092   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.461110   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.461162   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.461200   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.461239   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.465673   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.466074   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.466128   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.466547   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.466716   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.466745   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.466905   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.466931   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.471312   14046 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093588"
	I1202 11:31:24.471365   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.471748   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.471776   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.496148   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I1202 11:31:24.496823   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.496855   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I1202 11:31:24.497309   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.497323   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.497591   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.497750   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.497825   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.498357   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.498378   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.498441   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I1202 11:31:24.498718   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I1202 11:31:24.498914   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.499256   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.499500   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.499512   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.500301   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.500723   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.500766   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.500873   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.500903   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.500931   14046 addons.go:234] Setting addon default-storageclass=true in "addons-093588"
	I1202 11:31:24.501152   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.501175   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.501511   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.501552   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.501848   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.501865   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.502157   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I1202 11:31:24.502338   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.502394   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I1202 11:31:24.504903   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I1202 11:31:24.504935   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.504948   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I1202 11:31:24.504909   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46127
	I1202 11:31:24.505440   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.505449   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.505787   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.505960   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.505975   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506128   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.506144   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506220   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35429
	I1202 11:31:24.506436   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.506455   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506465   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.506589   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.506809   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.506836   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506857   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.506898   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.507198   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.507202   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.507232   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.507197   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.507262   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.507386   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.507402   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.507680   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.507716   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.507930   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.507970   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.508215   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.508266   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.508349   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.508886   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.508912   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.509328   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.509388   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.509745   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.509772   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.514152   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I1202 11:31:24.514536   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.515551   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.515567   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.515900   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.516060   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.516750   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.516781   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.518417   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.520369   14046 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:31:24.521684   14046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:24.521708   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:31:24.521726   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.521971   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I1202 11:31:24.522467   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.522948   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.522967   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.523428   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.524037   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.524073   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.525278   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.525908   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I1202 11:31:24.525964   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.525982   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.526153   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.526308   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I1202 11:31:24.526334   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.526703   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.526823   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.527519   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I1202 11:31:24.528000   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1202 11:31:24.538245   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I1202 11:31:24.538353   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I1202 11:31:24.538752   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.539216   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.539400   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.539426   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.539696   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.539718   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.539894   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I1202 11:31:24.540083   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.540314   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.540385   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.540470   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I1202 11:31:24.540924   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.540946   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.541203   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.541339   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.541664   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.541754   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.542181   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.542207   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.542260   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.542496   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41727
	I1202 11:31:24.542555   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.542776   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.543259   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.543292   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.543731   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.544610   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.544628   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.544664   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.545126   14046 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1202 11:31:24.545168   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.545195   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.545269   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 11:31:24.545304   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.545395   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.545435   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.545470   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I1202 11:31:24.545618   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.546338   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.546345   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.546363   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.546433   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.546568   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.546647   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.547233   14046 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1202 11:31:24.547248   14046 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1202 11:31:24.547254   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.547267   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.547393   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547414   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.547445   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547394   14046 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1202 11:31:24.547458   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.547828   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.547575   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547955   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.547954   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547972   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.548039   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.548201   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 11:31:24.548709   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.548404   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.548416   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.548427   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.549098   14046 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1202 11:31:24.549523   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.549556   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.549153   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.549174   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.549198   14046 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 11:31:24.550744   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.550442   14046 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:24.550949   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 11:31:24.550979   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.551266   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:24.551295   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:24.551395   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.551754   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.551922   14046 out.go:177]   - Using image docker.io/registry:2.8.3
	I1202 11:31:24.553135   14046 out.go:177]   - Using image docker.io/busybox:stable
	I1202 11:31:24.553248   14046 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 11:31:24.553258   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 11:31:24.553274   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.551866   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 11:31:24.554056   14046 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I1202 11:31:24.554077   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:24.554099   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:24.554115   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:24.554335   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:24.554353   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	W1202 11:31:24.554435   14046 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 11:31:24.554726   14046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:24.554748   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 11:31:24.554766   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.555827   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I1202 11:31:24.556405   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 11:31:24.556577   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.557402   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.557420   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.558057   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.558321   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 11:31:24.559321   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.559334   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.559360   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.559379   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.559664   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.559799   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.559949   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.559958   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.560013   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.560428   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.560586   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 11:31:24.561196   14046 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1202 11:31:24.561215   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 11:31:24.562449   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 11:31:24.562471   14046 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 11:31:24.562476   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 11:31:24.562492   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.562522   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 11:31:24.562532   14046 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 11:31:24.562547   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.562576   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.563149   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.564463   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.564709   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 11:31:24.565746   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1202 11:31:24.565810   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 11:31:24.565819   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 11:31:24.565837   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.566364   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.566848   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.566871   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.566945   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.567112   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.567307   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.567663   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.567855   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:24.567997   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.569106   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.569689   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.569721   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.569864   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.570008   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:24.570020   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.570160   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.570283   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.570565   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.570706   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.570730   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.571250   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.571276   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.571330   14046 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:24.571343   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 11:31:24.571359   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.571508   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.571529   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.571688   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.571705   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.572025   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572133   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572305   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572362   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.572452   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.572512   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.572564   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572615   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.572667   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.572746   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.572801   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.572834   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.573137   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.573154   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.573204   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.573482   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.573956   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I1202 11:31:24.574649   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.575018   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.575407   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.575423   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.575488   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.575503   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.575585   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.575718   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.575817   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.575932   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.576165   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.576573   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.578063   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1202 11:31:24.578225   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.578617   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.579075   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.579092   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.579154   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I1202 11:31:24.579518   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.579676   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.579703   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.579828   14046 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1202 11:31:24.580453   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.580474   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.580807   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.580980   14046 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:24.580998   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 11:31:24.581005   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.581024   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.583031   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.583283   14046 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:24.583297   14046 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:31:24.583313   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.583668   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1202 11:31:24.583768   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I1202 11:31:24.584191   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.584544   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.584976   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.584990   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.585105   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.585117   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.585563   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.586229   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.586400   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.586629   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.586648   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.586682   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.586726   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.587020   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.587065   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.587113   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.587132   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.587150   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.587302   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.587360   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.587553   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.587779   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.587937   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.588096   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	W1202 11:31:24.588952   14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36208->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.588979   14046 retry.go:31] will retry after 162.447336ms: ssh: handshake failed: read tcp 192.168.39.1:36208->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.589096   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.589515   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.591066   14046 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 11:31:24.591067   14046 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 11:31:24.592118   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 11:31:24.592128   14046 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 11:31:24.592143   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.592191   14046 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:24.592198   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 11:31:24.592207   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.595378   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.595644   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I1202 11:31:24.595803   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.595822   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.595882   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.596063   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.596170   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.596184   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.596388   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.596513   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.596802   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.596819   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.596932   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.596948   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	W1202 11:31:24.597052   14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36224->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.597073   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.597073   14046 retry.go:31] will retry after 286.075051ms: ssh: handshake failed: read tcp 192.168.39.1:36224->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.597103   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.597222   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.597240   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.597394   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.597488   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	W1202 11:31:24.598040   14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36232->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.598119   14046 retry.go:31] will retry after 354.610148ms: ssh: handshake failed: read tcp 192.168.39.1:36232->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.598499   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.599979   14046 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1202 11:31:24.601395   14046 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:24.601408   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1202 11:31:24.601419   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.603772   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.604008   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.604034   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.604262   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.604434   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.604557   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.604666   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.838994   14046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:24.839176   14046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:31:24.858733   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:24.883859   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 11:31:24.883887   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 11:31:24.906094   14046 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:24.906113   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1202 11:31:24.933933   14046 node_ready.go:35] waiting up to 6m0s for node "addons-093588" to be "Ready" ...
	I1202 11:31:24.937202   14046 node_ready.go:49] node "addons-093588" has status "Ready":"True"
	I1202 11:31:24.937231   14046 node_ready.go:38] duration metric: took 3.246311ms for node "addons-093588" to be "Ready" ...
	I1202 11:31:24.937242   14046 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:31:24.944817   14046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace to be "Ready" ...
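The readiness gates logged just above (node "Ready", then each system-critical pod) can be reproduced outside the test harness with plain kubectl. A minimal sketch using the cluster context shown in the logs; this is not the code path the test itself runs:

    kubectl --context addons-093588 wait --for=condition=Ready node/addons-093588 --timeout=6m
    kubectl --context addons-093588 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m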
	I1202 11:31:24.959764   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:24.974400   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:25.028401   14046 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 11:31:25.028429   14046 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 11:31:25.064822   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:25.066238   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:25.067275   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:25.094521   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:25.096473   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 11:31:25.096494   14046 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 11:31:25.120768   14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 11:31:25.120785   14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 11:31:25.127040   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 11:31:25.127059   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 11:31:25.143070   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:25.211041   14046 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:25.211067   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 11:31:25.224348   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:25.224377   14046 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 11:31:25.306922   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 11:31:25.306951   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 11:31:25.312328   14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 11:31:25.312353   14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 11:31:25.354346   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:25.367695   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:25.440430   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:25.489263   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 11:31:25.489288   14046 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 11:31:25.494108   14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 11:31:25.494123   14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 11:31:25.505892   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 11:31:25.505913   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 11:31:25.736316   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 11:31:25.736339   14046 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 11:31:25.747519   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 11:31:25.747550   14046 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 11:31:25.785153   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 11:31:25.785175   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 11:31:26.043257   14046 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:26.043281   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 11:31:26.066545   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 11:31:26.066566   14046 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 11:31:26.144474   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 11:31:26.144499   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 11:31:26.257832   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:26.275811   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:26.275838   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 11:31:26.434738   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 11:31:26.434762   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 11:31:26.548682   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:26.657205   14046 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.81798875s)
	I1202 11:31:26.657245   14046 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
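The sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the sed expressions in the command (not read back from the cluster), the patched Corefile carries approximately this shape: a log directive ahead of errors, and a hosts stanza ahead of the forward plugin:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }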
	I1202 11:31:26.856310   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 11:31:26.856338   14046 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 11:31:26.953933   14046 pod_ready.go:103] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:27.167644   14046 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093588" context rescaled to 1 replicas
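The rescale noted above (the coredns deployment brought down to a single replica) is equivalent to a plain scale call; a hedged one-liner, not the kapi helper the log refers to:

    kubectl --context addons-093588 -n kube-system scale deployment coredns --replicas=1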
	I1202 11:31:27.259915   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 11:31:27.259936   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 11:31:27.322490   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.463722869s)
	I1202 11:31:27.322546   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:27.322564   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:27.322869   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:27.322892   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:27.322905   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:27.322920   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:27.322928   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:27.323166   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:27.323183   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:27.562371   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 11:31:27.562393   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 11:31:27.852297   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:27.852375   14046 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 11:31:28.172709   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:28.882091   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.922291512s)
	I1202 11:31:28.882152   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:28.882167   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:28.882444   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:28.882462   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:28.882477   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:28.882489   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:28.882787   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:28.882836   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.144693   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.170258844s)
	I1202 11:31:29.144737   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.079888384s)
	I1202 11:31:29.144756   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144770   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.144799   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.078529681s)
	I1202 11:31:29.144757   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144846   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.144852   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.07756002s)
	I1202 11:31:29.144846   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144869   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144875   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.144879   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145322   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145331   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145341   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145346   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145353   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145349   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145356   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145375   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145388   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145396   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145403   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145410   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145364   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145442   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145449   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145457   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145463   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145509   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145538   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145569   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145631   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145731   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145771   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145586   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145602   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145796   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145408   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145753   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.147160   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.147164   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.147221   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.196163   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.196191   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.196457   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.196478   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	W1202 11:31:29.196562   14046 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
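The "object has been modified" failure above is the usual optimistic-concurrency conflict: the addon tried to mark local-path as the default StorageClass while another writer was updating the same object. A sketch of applying the same annotation by hand, which works against the latest resourceVersion each time it is re-run (an assumed workaround, not something the test performs):

    kubectl --context addons-093588 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'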
	I1202 11:31:29.226575   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.226598   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.226874   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.226906   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.226912   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.494215   14046 pod_ready.go:103] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:29.630127   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.535563804s)
	I1202 11:31:29.630190   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.630203   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.630516   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.630567   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.630585   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.630598   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.630831   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.630845   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.630876   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:31.078617   14046 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:31.078643   14046 pod_ready.go:82] duration metric: took 6.133804282s for pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:31.078656   14046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:31.604453   14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 11:31:31.604494   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:31.607456   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:31.607858   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:31.607881   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:31.608126   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:31.608355   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:31.608517   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:31.608723   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:32.118212   14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 11:31:32.349435   14046 addons.go:234] Setting addon gcp-auth=true in "addons-093588"
	I1202 11:31:32.349504   14046 host.go:66] Checking if "addons-093588" exists ...
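Setting addon gcp-auth=true here corresponds to the user-facing addons command; a hedged equivalent from the host, assuming the same profile name:

    minikube -p addons-093588 addons enable gcp-auth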
	I1202 11:31:32.349844   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:32.349891   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:32.364178   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1202 11:31:32.364679   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:32.365165   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:32.365189   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:32.365508   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:32.366128   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:32.366180   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:32.380001   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1202 11:31:32.380476   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:32.380998   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:32.381020   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:32.381308   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:32.381494   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:32.382817   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:32.382990   14046 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 11:31:32.383016   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:32.385576   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:32.385914   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:32.385938   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:32.386042   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:32.386234   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:32.386377   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:32.386522   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:32.584928   14046 pod_ready.go:93] pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.584947   14046 pod_ready.go:82] duration metric: took 1.506285543s for pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.584957   14046 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.592458   14046 pod_ready.go:93] pod "etcd-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.592477   14046 pod_ready.go:82] duration metric: took 7.514441ms for pod "etcd-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.592489   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.601795   14046 pod_ready.go:93] pod "kube-apiserver-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.601819   14046 pod_ready.go:82] duration metric: took 9.321566ms for pod "kube-apiserver-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.601831   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.611257   14046 pod_ready.go:93] pod "kube-controller-manager-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.611277   14046 pod_ready.go:82] duration metric: took 9.438391ms for pod "kube-controller-manager-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.611290   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bqbx" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.628996   14046 pod_ready.go:93] pod "kube-proxy-8bqbx" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.629014   14046 pod_ready.go:82] duration metric: took 17.716285ms for pod "kube-proxy-8bqbx" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.629025   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:33.216468   14046 pod_ready.go:93] pod "kube-scheduler-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:33.216492   14046 pod_ready.go:82] duration metric: took 587.459361ms for pod "kube-scheduler-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:33.216500   14046 pod_ready.go:39] duration metric: took 8.279244651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:31:33.216514   14046 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:31:33.216560   14046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:31:33.790935   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.647826005s)
	I1202 11:31:33.791001   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791005   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.436618964s)
	I1202 11:31:33.791013   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791043   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791055   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791071   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.423342397s)
	I1202 11:31:33.791100   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791119   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791168   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.35070535s)
	I1202 11:31:33.791202   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791213   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791317   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.533440621s)
	W1202 11:31:33.791345   14046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 11:31:33.791371   14046 retry.go:31] will retry after 369.700432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 11:31:33.791378   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791400   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791427   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.242721006s)
	I1202 11:31:33.791438   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791437   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791445   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791446   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791450   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791468   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791476   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791488   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791454   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791521   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791456   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791549   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791493   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791527   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791567   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791575   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791531   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791584   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791586   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791593   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791875   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791887   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791898   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791905   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791907   14046 addons.go:475] Verifying addon metrics-server=true in "addons-093588"
	I1202 11:31:33.791912   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791950   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791969   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791987   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791994   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.792000   14046 addons.go:475] Verifying addon ingress=true in "addons-093588"
	I1202 11:31:33.792201   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.792244   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.792252   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.792260   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.792268   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.793186   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.793210   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.793216   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.793370   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.793378   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.793386   14046 addons.go:475] Verifying addon registry=true in "addons-093588"
	I1202 11:31:33.794805   14046 out.go:177] * Verifying ingress addon...
	I1202 11:31:33.794865   14046 out.go:177] * Verifying registry addon...
	I1202 11:31:33.794866   14046 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-093588 service yakd-dashboard -n yakd-dashboard
	
	I1202 11:31:33.797060   14046 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 11:31:33.797129   14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 11:31:33.824422   14046 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 11:31:33.824444   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:33.824727   14046 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 11:31:33.824743   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.161790   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:34.343123   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.350325   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.694164   14046 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.311156085s)
	I1202 11:31:34.694245   14046 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.477671289s)
	I1202 11:31:34.694273   14046 api_server.go:72] duration metric: took 10.278626446s to wait for apiserver process to appear ...
	I1202 11:31:34.694284   14046 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:31:34.694305   14046 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1202 11:31:34.694162   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.521398959s)
	I1202 11:31:34.694468   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:34.694493   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:34.694736   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:34.694753   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:34.694762   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:34.694769   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:34.694740   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:34.694983   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:34.694992   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:34.695001   14046 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093588"
	I1202 11:31:34.695677   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:34.696349   14046 out.go:177] * Verifying csi-hostpath-driver addon...
	I1202 11:31:34.697823   14046 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 11:31:34.698942   14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 11:31:34.698953   14046 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 11:31:34.699027   14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 11:31:34.717379   14046 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
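
	Note: the healthz probe above is a plain HTTPS GET against the apiserver endpoint logged two lines earlier. Assuming the default system:public-info-viewer binding (which permits unauthenticated access to /healthz), the same check can be reproduced from the host with:

		curl -k https://192.168.39.203:8443/healthz
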
	I1202 11:31:34.733562   14046 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 11:31:34.733577   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:34.734181   14046 api_server.go:141] control plane version: v1.31.2
	I1202 11:31:34.734195   14046 api_server.go:131] duration metric: took 39.902425ms to wait for apiserver health ...
	I1202 11:31:34.734202   14046 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:31:34.772298   14046 system_pods.go:59] 19 kube-system pods found
	I1202 11:31:34.772342   14046 system_pods.go:61] "amd-gpu-device-plugin-9x4xz" [55df6bd8-36c5-4864-8918-ac9425f2f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 11:31:34.772353   14046 system_pods.go:61] "coredns-7c65d6cfc9-5lcqk" [4d7cf83e-5dd7-42fb-982f-a45f12d7a40b] Running
	I1202 11:31:34.772365   14046 system_pods.go:61] "coredns-7c65d6cfc9-sh425" [749fc6c5-7fb8-4660-876f-15b8c46c2e50] Running
	I1202 11:31:34.772376   14046 system_pods.go:61] "csi-hostpath-attacher-0" [9090d43f-db00-4d9f-a761-7e784e7d66e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 11:31:34.772391   14046 system_pods.go:61] "csi-hostpath-resizer-0" [eacac2d8-005d-4f85-aa5f-5ee6725473a4] Pending
	I1202 11:31:34.772405   14046 system_pods.go:61] "csi-hostpathplugin-jtbvg" [5558e993-a5eb-47db-b72e-028a2df87321] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 11:31:34.772416   14046 system_pods.go:61] "etcd-addons-093588" [133711db-b531-4f45-b56d-d479fc0d3bf2] Running
	I1202 11:31:34.772427   14046 system_pods.go:61] "kube-apiserver-addons-093588" [4fa270b4-87bc-41ea-9c7e-d194a6a7a8dd] Running
	I1202 11:31:34.772438   14046 system_pods.go:61] "kube-controller-manager-addons-093588" [b742eb2a-db16-4d33-8520-0bbb9c083127] Running
	I1202 11:31:34.772452   14046 system_pods.go:61] "kube-ingress-dns-minikube" [93d2e4da-4868-4b1e-9718-bcc404d49f31] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 11:31:34.772462   14046 system_pods.go:61] "kube-proxy-8bqbx" [f637fa3b-3c50-489d-b864-5477922486f8] Running
	I1202 11:31:34.772473   14046 system_pods.go:61] "kube-scheduler-addons-093588" [115de73f-014e-43eb-bf1c-4294dc736871] Running
	I1202 11:31:34.772486   14046 system_pods.go:61] "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 11:31:34.772500   14046 system_pods.go:61] "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 11:31:34.772515   14046 system_pods.go:61] "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 11:31:34.772529   14046 system_pods.go:61] "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 11:31:34.772544   14046 system_pods.go:61] "snapshot-controller-56fcc65765-5684m" [1b9feacd-f2e4-41f7-abc9-06e472d66f0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.772558   14046 system_pods.go:61] "snapshot-controller-56fcc65765-dj6kc" [ea0e750d-7300-4238-9443-627b04eb650d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.772570   14046 system_pods.go:61] "storage-provisioner" [90465e3b-c05f-4fff-a0f6-c6a8b7703e89] Running
	I1202 11:31:34.772583   14046 system_pods.go:74] duration metric: took 38.374545ms to wait for pod list to return data ...
	I1202 11:31:34.772598   14046 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:31:34.779139   14046 default_sa.go:45] found service account: "default"
	I1202 11:31:34.779155   14046 default_sa.go:55] duration metric: took 6.550708ms for default service account to be created ...
	I1202 11:31:34.779163   14046 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:31:34.807767   14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 11:31:34.807791   14046 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 11:31:34.811811   14046 system_pods.go:86] 19 kube-system pods found
	I1202 11:31:34.811834   14046 system_pods.go:89] "amd-gpu-device-plugin-9x4xz" [55df6bd8-36c5-4864-8918-ac9425f2f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 11:31:34.811839   14046 system_pods.go:89] "coredns-7c65d6cfc9-5lcqk" [4d7cf83e-5dd7-42fb-982f-a45f12d7a40b] Running
	I1202 11:31:34.811846   14046 system_pods.go:89] "coredns-7c65d6cfc9-sh425" [749fc6c5-7fb8-4660-876f-15b8c46c2e50] Running
	I1202 11:31:34.811851   14046 system_pods.go:89] "csi-hostpath-attacher-0" [9090d43f-db00-4d9f-a761-7e784e7d66e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 11:31:34.811862   14046 system_pods.go:89] "csi-hostpath-resizer-0" [eacac2d8-005d-4f85-aa5f-5ee6725473a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 11:31:34.811871   14046 system_pods.go:89] "csi-hostpathplugin-jtbvg" [5558e993-a5eb-47db-b72e-028a2df87321] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 11:31:34.811874   14046 system_pods.go:89] "etcd-addons-093588" [133711db-b531-4f45-b56d-d479fc0d3bf2] Running
	I1202 11:31:34.811878   14046 system_pods.go:89] "kube-apiserver-addons-093588" [4fa270b4-87bc-41ea-9c7e-d194a6a7a8dd] Running
	I1202 11:31:34.811882   14046 system_pods.go:89] "kube-controller-manager-addons-093588" [b742eb2a-db16-4d33-8520-0bbb9c083127] Running
	I1202 11:31:34.811890   14046 system_pods.go:89] "kube-ingress-dns-minikube" [93d2e4da-4868-4b1e-9718-bcc404d49f31] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 11:31:34.811893   14046 system_pods.go:89] "kube-proxy-8bqbx" [f637fa3b-3c50-489d-b864-5477922486f8] Running
	I1202 11:31:34.811900   14046 system_pods.go:89] "kube-scheduler-addons-093588" [115de73f-014e-43eb-bf1c-4294dc736871] Running
	I1202 11:31:34.811907   14046 system_pods.go:89] "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 11:31:34.811912   14046 system_pods.go:89] "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 11:31:34.811920   14046 system_pods.go:89] "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 11:31:34.811925   14046 system_pods.go:89] "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 11:31:34.811930   14046 system_pods.go:89] "snapshot-controller-56fcc65765-5684m" [1b9feacd-f2e4-41f7-abc9-06e472d66f0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.811935   14046 system_pods.go:89] "snapshot-controller-56fcc65765-dj6kc" [ea0e750d-7300-4238-9443-627b04eb650d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.811941   14046 system_pods.go:89] "storage-provisioner" [90465e3b-c05f-4fff-a0f6-c6a8b7703e89] Running
	I1202 11:31:34.811947   14046 system_pods.go:126] duration metric: took 32.779668ms to wait for k8s-apps to be running ...
	I1202 11:31:34.811953   14046 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:31:34.811993   14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:31:34.814772   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.814898   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.865148   14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:34.865170   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 11:31:34.910684   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:35.212476   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.302270   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.306145   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.704047   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.804040   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.804460   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.906004   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.744152206s)
	I1202 11:31:35.906055   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:35.906063   14046 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.094047231s)
	I1202 11:31:35.906092   14046 system_svc.go:56] duration metric: took 1.094134923s WaitForService to wait for kubelet
	I1202 11:31:35.906107   14046 kubeadm.go:582] duration metric: took 11.490458054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:31:35.906141   14046 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:31:35.906072   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:35.906478   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:35.906510   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:35.906522   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:35.906529   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:35.906722   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:35.906735   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:35.909515   14046 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:31:35.909536   14046 node_conditions.go:123] node cpu capacity is 2
	I1202 11:31:35.909545   14046 node_conditions.go:105] duration metric: took 3.397157ms to run NodePressure ...
	I1202 11:31:35.909555   14046 start.go:241] waiting for startup goroutines ...
	I1202 11:31:36.207546   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.311696   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.323552   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:36.524594   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.613849544s)
	I1202 11:31:36.524666   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:36.524682   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:36.525003   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:36.525022   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:36.525036   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:36.525064   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:36.525075   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:36.525318   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:36.525334   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:36.525348   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:36.526230   14046 addons.go:475] Verifying addon gcp-auth=true in "addons-093588"
	I1202 11:31:36.528737   14046 out.go:177] * Verifying gcp-auth addon...
	I1202 11:31:36.530986   14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 11:31:36.578001   14046 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 11:31:36.578020   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:36.704649   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.809208   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.809895   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.037424   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.203141   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.301723   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.302535   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.535104   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.703267   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.802335   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.802610   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.036909   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.204479   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.301632   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:38.302255   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.534810   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.704658   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.802708   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.803554   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.037174   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.292617   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.392307   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.392645   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:39.535333   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.704929   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.802557   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.803397   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.035299   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.205429   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.301785   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.301851   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.535337   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.703275   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.800655   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.801812   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.034994   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.204157   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.302831   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:41.303262   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.535151   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.703985   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.801319   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.801446   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.034352   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.203443   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.302890   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.304166   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:42.535013   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.703286   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.800816   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.801395   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:43.035672   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.203886   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.300980   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.301410   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:43.535388   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.704078   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.801008   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.801871   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.035750   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.241245   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.303030   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.303402   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.535189   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.704145   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.802535   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.803477   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.035547   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.205121   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.302246   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.306235   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.534465   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.703630   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.801940   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.802281   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.035662   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.203259   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.302067   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.302106   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.534762   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.703700   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.800864   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.802040   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.036727   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.204080   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.301844   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.301978   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.534983   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.704106   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.801707   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.803397   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.035137   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.203099   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.301547   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.301783   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.533891   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.703958   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.800958   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.801440   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.034561   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.204427   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.300634   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.301040   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.796093   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.796650   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.894974   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.895409   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.035131   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.205221   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.303043   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.303481   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.534978   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.704273   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.801772   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.801913   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.036221   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.202958   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.301672   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.303883   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.535974   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.705307   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.801763   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.802054   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.034979   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.204086   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.304301   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.305641   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.535427   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.704315   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.802423   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.802894   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.034594   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.204339   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.303653   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.306254   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.535883   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.704290   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.801531   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.802072   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.117303   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.203910   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.302087   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.302794   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.535306   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.703953   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.801915   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.801935   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:55.035228   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.203582   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.301814   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.302766   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:55.534254   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.703526   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.801462   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.801784   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.034736   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.204957   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.302824   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.303171   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.535416   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.704209   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.800476   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.802007   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.034734   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.204149   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.301587   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:57.302347   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.534833   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.704817   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.802147   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.802493   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.034493   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.203588   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.301828   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.302488   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:58.534315   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.705874   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.801208   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.802117   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.035206   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.204016   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.300680   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.301228   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.534267   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.703462   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.802411   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.805743   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.034944   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.205868   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.302403   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.302619   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.535930   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.705347   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.802373   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.802691   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.034165   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.203083   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.302108   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.302231   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.534962   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.704177   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.800790   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.801125   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.035522   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.207255   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.305529   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:02.305891   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.535277   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.703940   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.801885   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.801903   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.035451   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.203573   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.302065   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:03.302261   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.535720   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.703935   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.800844   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.801307   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.035517   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.209494   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.301432   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.302504   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.534911   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.703576   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.803619   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.804099   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.037027   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.204348   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.304406   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.305143   14046 kapi.go:107] duration metric: took 31.508010049s to wait for kubernetes.io/minikube-addons=registry ...
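
	Note: the registry wait above ends once the pods behind the polled label selector come up. A quick manual confirmation, reusing the selector and namespace from the kapi.go lines earlier in this log:

		kubectl --context addons-093588 -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o wide
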
	I1202 11:32:05.539056   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.704700   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.804304   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.039817   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.205353   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.310095   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.534977   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.704090   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.800726   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.035759   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.204852   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.301177   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.534942   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.703430   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.801253   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.035545   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.203485   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.304272   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.535354   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.703653   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.801345   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.035283   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.203667   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.301315   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.534575   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.708677   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.801812   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.034861   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.204571   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.685014   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.785858   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.786536   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.800928   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.034660   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.203914   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.303391   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.535680   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.704751   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.805498   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.043914   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:12.203937   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:12.301289   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.536468   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.048324   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.048675   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.048713   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.206976   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.306351   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.535323   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.704264   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.804182   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.035842   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.208917   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.301365   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.535026   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.703588   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.801725   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.034610   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.204327   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.304934   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.534739   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.704785   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.801778   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.034504   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.204196   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.630650   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.632171   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.703056   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.801188   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.034638   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.204193   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.305590   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.537824   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.703501   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.801783   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.274930   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.277014   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.324560   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.536509   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.704072   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.801749   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.036866   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.203700   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.305338   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.534946   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.703543   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.801503   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.033851   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.204394   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.301489   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.534043   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.704035   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.802048   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.035351   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.204075   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.304623   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.534698   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.703740   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.800941   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.035176   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.204538   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.303225   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.535611   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.703682   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.802117   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.379807   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.382707   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.383795   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.537984   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.707670   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.801120   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.035076   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.205347   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.301126   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.535567   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.703844   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.801658   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.035126   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.205250   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.302531   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.535923   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.703680   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.801499   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.034235   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.204524   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.301216   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.534899   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.703670   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.801160   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.034705   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.209222   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.311879   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.551203   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.706021   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.804614   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.035342   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.203667   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.301793   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.544354   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.711784   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.810267   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.034649   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:29.204547   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.301152   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.534413   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:29.704108   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.802865   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:30.035779   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:30.204665   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.304717   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:30.534685   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:30.703851   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.802376   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.037512   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:31.544834   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.545362   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.557069   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:31.706516   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.807268   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.034741   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:32.204171   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.301464   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.534454   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:32.704155   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.801829   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.034795   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:33.203510   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.306267   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.536390   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:33.708085   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.802088   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.034963   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:34.204776   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.308108   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.536044   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:34.703641   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.804438   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.035343   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:35.203465   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:35.303592   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.535810   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:35.709085   14046 kapi.go:107] duration metric: took 1m1.010057933s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 11:32:35.802151   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:36.035498   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:36.301273   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:36.534659   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:36.801419   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:37.035446   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:37.301607   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:37.534705   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:37.803178   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:38.035229   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:38.301283   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:38.535357   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:38.801506   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:39.035756   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:39.303845   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:39.536507   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:39.803141   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:40.035121   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:40.308205   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:40.535897   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:40.803283   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:41.035083   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:41.302929   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:41.534524   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:41.801381   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:42.035509   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:42.301517   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:42.534206   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:42.801696   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:43.363908   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:43.367795   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:43.534145   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:43.801034   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:44.035075   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:44.301680   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:44.535413   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:44.802593   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:45.036399   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:45.303689   14046 kapi.go:107] duration metric: took 1m11.506622692s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 11:32:45.534723   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:46.035278   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:46.534932   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:47.034975   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:47.535739   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:48.034856   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:48.535985   14046 kapi.go:107] duration metric: took 1m12.004997488s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 11:32:48.537647   14046 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093588 cluster.
	I1202 11:32:48.538975   14046 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 11:32:48.540091   14046 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 11:32:48.541177   14046 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, default-storageclass, inspektor-gadget, metrics-server, amd-gpu-device-plugin, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1202 11:32:48.542184   14046 addons.go:510] duration metric: took 1m24.126505676s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin default-storageclass inspektor-gadget metrics-server amd-gpu-device-plugin yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1202 11:32:48.542232   14046 start.go:246] waiting for cluster config update ...
	I1202 11:32:48.542256   14046 start.go:255] writing updated cluster config ...
	I1202 11:32:48.542565   14046 ssh_runner.go:195] Run: rm -f paused
	I1202 11:32:48.592664   14046 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:32:48.594409   14046 out.go:177] * Done! kubectl is now configured to use "addons-093588" cluster and "default" namespace by default
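The kapi.go:96/kapi.go:107 lines above show minikube polling each addon's pods by label selector until they leave Pending, then reporting a duration metric once the selector is satisfied. As a rough illustration only (this is a minimal client-go sketch, not minikube's actual kapi.go implementation; the kubeconfig path, namespace, poll interval, and timeout are illustrative assumptions), a wait of that style could look like:

    // Sketch: poll pods matching a label selector until all are Running.
    // Not minikube's implementation; values below are illustrative.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"path/filepath"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	// Assumed kubeconfig location; adjust for your environment.
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// One of the selectors waited on in the log above.
    	selector := "kubernetes.io/minikube-addons=gcp-auth"
    	start := time.Now()

    	// Poll every 500ms, give up after 6 minutes (both illustrative).
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, err
    			}
    			if len(pods.Items) == 0 {
    				return false, nil // nothing scheduled yet; keep waiting
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
    }

As the gcp-auth output above notes, a pod can opt out of the credential mount by carrying the `gcp-auth-skip-secret` label; the ingress-nginx controller sandbox in the CRI-O listing below carries `gcp-auth-skip-secret: true`, for example.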
	
	
	==> CRI-O <==
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.546645088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139376546619807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3be55fb7-14d1-42f4-8e80-2a101d61c7e1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.547134748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20b53e1c-ffb8-4d26-b388-2abd52ddfd02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.547272430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20b53e1c-ffb8-4d26-b388-2abd52ddfd02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.547655364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3decc9cb607d18fa1d54ce547fba6341d15db31ca406cbd9e3b67c7274100e4,PodSandboxId:4dd378cbb1fe84c8a415b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733139164704900997,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,i
o.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1c9894577d1715169976fac54e5e92fc068f4f1d8e28d8e59c638e2c000387fa,PodSandboxId:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526f
f8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147156540134,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a0c87a19737894551d5457b50e806a71136166c7c35634861f55dca03207a3,PodSandboxId:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cb
fbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147066341976,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-
server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776c
ffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b
9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
efc6f97dd502796421c7ace089a6a9f104b7940b859b4ddfda4f5c8b56f5da02,PodSandboxId:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733139101270566536,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733139088826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.ha
sh: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20b53e1c-ffb8-4d26-b388-2abd52ddfd02 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.562538699Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0f7c457e-7579-4a90-8d35-94e904b03c15 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.562931824Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1733139236134573190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:33:55.813784593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9f6e4744-0d79-497c-83f9-2119471a0df3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139169488004241,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-83f9-2119471a0df3,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:32:49.179630307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4dd378cbb1fe84c8a4
15b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-5f85ff4588-jl9qn,Uid:4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139157786858360,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,pod-template-hash: 5f85ff4588,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:33.568934434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-s2pxw,Uid:c4ae830b-5959-4890-ba55-97c4e9066abc,Namespace:ingress-nginx,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1733139095466543770,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b805dbc6-2b5f-4e42-adf4-40c1eff8ead2,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: b805dbc6-2b5f-4e42-adf4-40c1eff8ead2,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:33.753971573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-7l67n,Uid:db5f3353-9ef7-4541-841d-b6d35db7f932,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,C
reatedAt:1733139095432878077,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b70b0154-91d5-4b9d-80d0-955dc822ff8f,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: b70b0154-91d5-4b9d-80d0-955dc822ff8f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:33.692954067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-z5r8x,Uid:b4ffaa02-f311-4afa-9113-ac7a8b7b5828,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139090850154614,Lab
els:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:30.233577345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-6bbl8,Uid:c2094412-6704-4c4f-8bc7-c21561ad7372,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139089909492415,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,pod-template-hash: 86d989889c,},Annotations:map[string]string{kuberne
tes.io/config.seen: 2024-12-02T11:31:29.114140663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:90465e3b-c05f-4fff-a0f6-c6a8b7703e89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139089310304757,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storag
e-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T11:31:28.880933004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:93d2e4da-4868-4b1e-9718-bcc404d49f31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139087638795064,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Ann
otations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-12-02T11:31:27.317731601Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&PodSandboxMeta
data{Name:amd-gpu-device-plugin-9x4xz,Uid:55df6bd8-36c5-4864-8918-ac9425f2f9cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139087295738499,Labels:map[string]string{controller-revision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:26.978148659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sh425,Uid:749fc6c5-7fb8-4660-876f-15b8c46c2e50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139085427682825,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:24.221453920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&PodSandboxMetadata{Name:kube-proxy-8bqbx,Uid:f637fa3b-3c50-489d-b864-5477922486f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139084010336031,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:23.103682635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94204ef648dac42b0379640042a7c974af9203d300ed
da9454e6243defccdd64,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-093588,Uid:fb05324ef0da57c6be9879c98c60ce72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073318258743,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fb05324ef0da57c6be9879c98c60ce72,kubernetes.io/config.seen: 2024-12-02T11:31:12.647806322Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&PodSandboxMetadata{Name:etcd-addons-093588,Uid:c463271d0012074285091ad6a9bb5269,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073315298857,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: c463271d0012074285091ad6a9bb5269,kubernetes.io/config.seen: 2024-12-02T11:31:12.647808573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-093588,Uid:5a54bf73c0b779fcefc9f9ad61889351,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073311589878,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserve
r.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 5a54bf73c0b779fcefc9f9ad61889351,kubernetes.io/config.seen: 2024-12-02T11:31:12.647798299Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-093588,Uid:2bc34c7aba0bd63feec10df99ed16d0b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073309670964,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bc34c7aba0bd63feec10df99ed16d0b,kubernetes.io/config.seen: 2024-12-02T11:31:12.647807592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0f7c457e-7579-4a90-8d35-94e904
b03c15 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.564019671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ffe7253-5388-4a4d-995c-614e76e96641 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.564091697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ffe7253-5388-4a4d-995c-614e76e96641 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.565341730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3decc9cb607d18fa1d54ce547fba6341d15db31ca406cbd9e3b67c7274100e4,PodSandboxId:4dd378cbb1fe84c8a415b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733139164704900997,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,i
o.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1c9894577d1715169976fac54e5e92fc068f4f1d8e28d8e59c638e2c000387fa,PodSandboxId:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526f
f8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147156540134,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a0c87a19737894551d5457b50e806a71136166c7c35634861f55dca03207a3,PodSandboxId:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cb
fbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147066341976,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-
server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776c
ffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b
9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
efc6f97dd502796421c7ace089a6a9f104b7940b859b4ddfda4f5c8b56f5da02,PodSandboxId:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733139101270566536,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733139088826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.ha
sh: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ffe7253-5388-4a4d-995c-614e76e96641 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.566581722Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},},}" file="otel-collector/interceptors.go:62" id=335f567d-04f6-41e9-ad27-fce7729ca926 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.566835427Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=335f567d-04f6-41e9-ad27-fce7729ca926 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567235277Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=69571f5b-a998-4682-8ffb-280932377acc name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567342973Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=69571f5b-a998-4682-8ffb-280932377acc name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567774926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},},}" file="otel-collector/interceptors.go:62" id=22865bb0-5039-44a3-bc98-590e7bb96d48 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567896290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22865bb0-5039-44a3-bc98-590e7bb96d48 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567954142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22865bb0-5039-44a3-bc98-590e7bb96d48 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.568321662Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7a911f85-bdec-4f21-8ac0-155e679d1454 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.568434373Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1733139376454965403,StartedAt:1733139376496518098,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kicbase/echo-server:1.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/containers/hello-world-app/f866a940,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/volumes/kubernetes.io~projected/kube-api-access-5sw9v,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/
var/log/pods/default_hello-world-app-55bf9c44b4-bq9jt_6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/hello-world-app/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7a911f85-bdec-4f21-8ac0-155e679d1454 name=/runtime.v1.RuntimeService/ContainerStatus
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.617913861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47fc6f50-92d0-47dc-b333-7493e17ffc76 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.617999526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47fc6f50-92d0-47dc-b333-7493e17ffc76 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.620857813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e37ce77b-bb6b-4dcf-82e5-ac930c1a0c6b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.622085992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139376622059403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e37ce77b-bb6b-4dcf-82e5-ac930c1a0c6b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.623380090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2ee9904-4e97-4d1c-902f-0cf42f9bddfb name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.623447426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2ee9904-4e97-4d1c-902f-0cf42f9bddfb name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.624293948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3decc9cb607d18fa1d54ce547fba6341d15db31ca406cbd9e3b67c7274100e4,PodSandboxId:4dd378cbb1fe84c8a415b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733139164704900997,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,i
o.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1c9894577d1715169976fac54e5e92fc068f4f1d8e28d8e59c638e2c000387fa,PodSandboxId:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526f
f8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147156540134,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a0c87a19737894551d5457b50e806a71136166c7c35634861f55dca03207a3,PodSandboxId:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cb
fbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147066341976,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-
server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776c
ffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b
9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
efc6f97dd502796421c7ace089a6a9f104b7940b859b4ddfda4f5c8b56f5da02,PodSandboxId:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733139101270566536,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733139088826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.ha
sh: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2ee9904-4e97-4d1c-902f-0cf42f9bddfb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	84b181ee3e257       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   06d534d8ecc02       hello-world-app-55bf9c44b4-bq9jt
	27ac1b9f95162       docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303                              2 minutes ago            Running             nginx                     0                   a4e1abefd1098       nginx
	5c6e1825b9c51       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   c5e32f031e4c7       busybox
	a3decc9cb607d       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   4dd378cbb1fe8       ingress-nginx-controller-5f85ff4588-jl9qn
	1c9894577d171       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              patch                     0                   8692758ceeb9f       ingress-nginx-admission-patch-s2pxw
	29a0c87a19737       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   5bb9c45ae5963       ingress-nginx-admission-create-7l67n
	3350f86d51ada       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago            Running             metrics-server            0                   4f8cd12020a86       metrics-server-84c5f94fbc-z5r8x
	bd688bac9204b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago            Running             local-path-provisioner    0                   727a2ad10b461       local-path-provisioner-86d989889c-6bbl8
	53ee4c3f1d373       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   20efed53273ca       amd-gpu-device-plugin-9x4xz
	efc6f97dd5027       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   65d00dd604b77       kube-ingress-dns-minikube
	777ee197b7d2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   dadb7aad77d41       storage-provisioner
	2415b4c333fed       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   1140032f7ee0a       coredns-7c65d6cfc9-sh425
	28fe66023dde9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago            Running             kube-proxy                0                   db3aa60a35b6c       kube-proxy-8bqbx
	5256bb6e86f1e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago            Running             etcd                      0                   e4ff56ebcc0a5       etcd-addons-093588
	3e083dadde5b1       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago            Running             kube-apiserver            0                   e2d72d2c0f73b       kube-apiserver-addons-093588
	6bf4cf0d44bb8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago            Running             kube-controller-manager   0                   94204ef648dac       kube-controller-manager-addons-093588
	9c587d7cc1d10       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago            Running             kube-scheduler            0                   7ecb4d3d09f04       kube-scheduler-addons-093588
	
	
	==> coredns [2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6] <==
	[INFO] 10.244.0.23:53682 - 5608 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013055s
	[INFO] 10.244.0.23:41205 - 33515 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131468s
	[INFO] 10.244.0.23:49880 - 4478 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119977s
	[INFO] 10.244.0.23:36694 - 64123 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000193882s
	[INFO] 10.244.0.23:42791 - 63566 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172378s
	[INFO] 10.244.0.23:55478 - 16821 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001265351s
	[INFO] 10.244.0.23:49121 - 51480 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001441104s
	[INFO] 10.244.0.29:47894 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000412828s
	[INFO] 10.244.0.29:53582 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130294s
	[INFO] 10.244.0.7:37205 - 20278 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00048156s
	[INFO] 10.244.0.7:37205 - 18843 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000824224s
	[INFO] 10.244.0.7:37205 - 37572 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000106395s
	[INFO] 10.244.0.7:37205 - 42345 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000222869s
	[INFO] 10.244.0.7:37205 - 6623 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000146608s
	[INFO] 10.244.0.7:37205 - 27288 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093673s
	[INFO] 10.244.0.7:37205 - 47481 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000157232s
	[INFO] 10.244.0.7:37205 - 65471 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000168623s
	[INFO] 10.244.0.7:38019 - 48214 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090451s
	[INFO] 10.244.0.7:38019 - 47930 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109788s
	[INFO] 10.244.0.7:42122 - 32670 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107923s
	[INFO] 10.244.0.7:42122 - 32447 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147476s
	[INFO] 10.244.0.7:57067 - 27166 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000155662s
	[INFO] 10.244.0.7:57067 - 26976 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092139s
	[INFO] 10.244.0.7:49262 - 29665 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051809s
	[INFO] 10.244.0.7:49262 - 29855 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007448s
	
	
	==> describe nodes <==
	Name:               addons-093588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-093588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=addons-093588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_31_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-093588
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:31:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-093588
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:36:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:34:22 +0000   Mon, 02 Dec 2024 11:31:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:34:22 +0000   Mon, 02 Dec 2024 11:31:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:34:22 +0000   Mon, 02 Dec 2024 11:31:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:34:22 +0000   Mon, 02 Dec 2024 11:31:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    addons-093588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b981ec46e284c639f1b7adc8d382e1a
	  System UUID:                0b981ec4-6e28-4c63-9f1b-7adc8d382e1a
	  Boot ID:                    df4ffb50-8889-4ff6-ab14-5cfc93566331
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     hello-world-app-55bf9c44b4-bq9jt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-jl9qn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m43s
	  kube-system                 amd-gpu-device-plugin-9x4xz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 coredns-7c65d6cfc9-sh425                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m52s
	  kube-system                 etcd-addons-093588                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m57s
	  kube-system                 kube-apiserver-addons-093588                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-addons-093588        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-8bqbx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-093588                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 metrics-server-84c5f94fbc-z5r8x              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m46s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-86d989889c-6bbl8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node addons-093588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node addons-093588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node addons-093588 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m58s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m57s                kubelet          Node addons-093588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s                kubelet          Node addons-093588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s                kubelet          Node addons-093588 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m57s                kubelet          Node addons-093588 status is now: NodeReady
	  Normal  RegisteredNode           4m53s                node-controller  Node addons-093588 event: Registered Node addons-093588 in Controller
	
	
	==> dmesg <==
	[  +6.481301] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.076788] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.295485] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.458763] systemd-fstab-generator[1464]: Ignoring "noauto" option for root device
	[  +4.667325] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.157468] kauditd_printk_skb: 127 callbacks suppressed
	[  +6.980309] kauditd_printk_skb: 100 callbacks suppressed
	[Dec 2 11:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.999753] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.944183] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.256346] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.021267] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.144991] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.049857] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.986101] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.155001] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 2 11:33] kauditd_printk_skb: 1 callbacks suppressed
	[ +21.174772] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.028933] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.099758] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.240942] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.362197] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 2 11:34] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.790044] kauditd_printk_skb: 49 callbacks suppressed
	[Dec 2 11:36] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630] <==
	{"level":"info","ts":"2024-12-02T11:32:31.509031Z","caller":"traceutil/trace.go:171","msg":"trace[1734125650] transaction","detail":"{read_only:false; response_revision:1081; number_of_response:1; }","duration":"413.800042ms","start":"2024-12-02T11:32:31.095224Z","end":"2024-12-02T11:32:31.509024Z","steps":["trace[1734125650] 'process raft request'  (duration: 412.556295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:31.509318Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:31.095216Z","time spent":"413.887629ms","remote":"127.0.0.1:54862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:837 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"info","ts":"2024-12-02T11:32:31.509910Z","caller":"traceutil/trace.go:171","msg":"trace[1769726754] linearizableReadLoop","detail":"{readStateIndex:1110; appliedIndex:1108; }","duration":"317.515372ms","start":"2024-12-02T11:32:31.190760Z","end":"2024-12-02T11:32:31.508275Z","steps":["trace[1769726754] 'read index received'  (duration: 316.176707ms)","trace[1769726754] 'applied index is now lower than readState.Index'  (duration: 1.338186ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-02T11:32:31.510039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.348778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:32:31.510230Z","caller":"traceutil/trace.go:171","msg":"trace[1905010634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"319.54177ms","start":"2024-12-02T11:32:31.190681Z","end":"2024-12-02T11:32:31.510223Z","steps":["trace[1905010634] 'agreement among raft nodes before linearized reading'  (duration: 319.33211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:31.510347Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:31.190637Z","time spent":"319.701654ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-02T11:32:31.510884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.331323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-67d98fc6b-hsrwm\" ","response":"range_response_count:1 size:4581"}
	{"level":"info","ts":"2024-12-02T11:32:31.510999Z","caller":"traceutil/trace.go:171","msg":"trace[1506151935] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-67d98fc6b-hsrwm; range_end:; response_count:1; response_revision:1081; }","duration":"290.452941ms","start":"2024-12-02T11:32:31.220538Z","end":"2024-12-02T11:32:31.510991Z","steps":["trace[1506151935] 'agreement among raft nodes before linearized reading'  (duration: 290.098187ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:31.511638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.271954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:32:31.511777Z","caller":"traceutil/trace.go:171","msg":"trace[41204636] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"223.410853ms","start":"2024-12-02T11:32:31.288359Z","end":"2024-12-02T11:32:31.511769Z","steps":["trace[41204636] 'agreement among raft nodes before linearized reading'  (duration: 223.263454ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:32:40.977312Z","caller":"traceutil/trace.go:171","msg":"trace[1810761727] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"124.053614ms","start":"2024-12-02T11:32:40.853243Z","end":"2024-12-02T11:32:40.977297Z","steps":["trace[1810761727] 'process raft request'  (duration: 123.593438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:43.349291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.790942ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888176061930468756 > lease_revoke:<id:35f59387238af47d>","response":"size:27"}
	{"level":"info","ts":"2024-12-02T11:32:43.349565Z","caller":"traceutil/trace.go:171","msg":"trace[978203883] linearizableReadLoop","detail":"{readStateIndex:1159; appliedIndex:1158; }","duration":"326.46397ms","start":"2024-12-02T11:32:43.023087Z","end":"2024-12-02T11:32:43.349551Z","steps":["trace[978203883] 'read index received'  (duration: 176.281029ms)","trace[978203883] 'applied index is now lower than readState.Index'  (duration: 150.108422ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-02T11:32:43.349677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.516213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:32:43.349811Z","caller":"traceutil/trace.go:171","msg":"trace[1556617837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1127; }","duration":"326.718412ms","start":"2024-12-02T11:32:43.023082Z","end":"2024-12-02T11:32:43.349800Z","steps":["trace[1556617837] 'agreement among raft nodes before linearized reading'  (duration: 326.493012ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:43.349891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:43.023040Z","time spent":"326.836999ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-12-02T11:33:25.193945Z","caller":"traceutil/trace.go:171","msg":"trace[1208210779] linearizableReadLoop","detail":"{readStateIndex:1400; appliedIndex:1399; }","duration":"208.352199ms","start":"2024-12-02T11:33:24.985579Z","end":"2024-12-02T11:33:25.193931Z","steps":["trace[1208210779] 'read index received'  (duration: 208.251949ms)","trace[1208210779] 'applied index is now lower than readState.Index'  (duration: 99.656µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-02T11:33:25.194034Z","caller":"traceutil/trace.go:171","msg":"trace[585938922] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"373.401906ms","start":"2024-12-02T11:33:24.820626Z","end":"2024-12-02T11:33:25.194028Z","steps":["trace[585938922] 'process raft request'  (duration: 373.198624ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:33:25.194112Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:33:24.820609Z","time spent":"373.443109ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4248,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" mod_revision:1354 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" value_size:4148 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" > >"}
	{"level":"warn","ts":"2024-12-02T11:33:25.194354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.766538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-66c9cd494c-4dmpv.180d58d19a377856\" ","response":"range_response_count:1 size:826"}
	{"level":"info","ts":"2024-12-02T11:33:25.194403Z","caller":"traceutil/trace.go:171","msg":"trace[1712045514] range","detail":"{range_begin:/registry/events/kube-system/registry-66c9cd494c-4dmpv.180d58d19a377856; range_end:; response_count:1; response_revision:1356; }","duration":"208.820614ms","start":"2024-12-02T11:33:24.985574Z","end":"2024-12-02T11:33:25.194395Z","steps":["trace[1712045514] 'agreement among raft nodes before linearized reading'  (duration: 208.618556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:33:25.194540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.491693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" ","response":"range_response_count:1 size:4263"}
	{"level":"info","ts":"2024-12-02T11:33:25.194787Z","caller":"traceutil/trace.go:171","msg":"trace[1843945180] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1; range_end:; response_count:1; response_revision:1356; }","duration":"187.741861ms","start":"2024-12-02T11:33:25.007033Z","end":"2024-12-02T11:33:25.194775Z","steps":["trace[1843945180] 'agreement among raft nodes before linearized reading'  (duration: 187.326707ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:33:46.232089Z","caller":"traceutil/trace.go:171","msg":"trace[564767909] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"323.809998ms","start":"2024-12-02T11:33:45.908266Z","end":"2024-12-02T11:33:46.232076Z","steps":["trace[564767909] 'process raft request'  (duration: 323.483126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:33:46.232222Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:33:45.908251Z","time spent":"323.913932ms","remote":"127.0.0.1:54798","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1524 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 11:36:16 up 5 min,  0 users,  load average: 0.31, 1.00, 0.55
	Linux addons-093588 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 11:33:18.047549       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
	E1202 11:33:18.054141       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
	E1202 11:33:18.078468       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
	I1202 11:33:18.164928       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1202 11:33:20.557130       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.110.149"}
	I1202 11:33:50.188346       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1202 11:33:51.225296       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1202 11:33:53.732475       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1202 11:33:55.653397       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 11:33:55.870562       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.159.13"}
	I1202 11:34:08.733165       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.733224       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.777393       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.778616       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.789293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.789434       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.791466       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.791539       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.934431       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W1202 11:34:09.792203       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1202 11:34:09.937879       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1202 11:34:09.937904       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1202 11:36:15.262680       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.26.244"}
	
	
	==> kube-controller-manager [6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96] <==
	E1202 11:34:42.751767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:34:46.163018       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:34:46.163054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:34:55.790553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:34:55.790616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:16.266561       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:16.266596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:17.101236       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:17.101354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:18.787290       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:18.787391       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:39.633090       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:39.633243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:47.738683       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:47.738802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:58.590855       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:58.590928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:58.607597       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:58.607631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1202 11:36:15.082221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.008333ms"
	I1202 11:36:15.092214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.839443ms"
	I1202 11:36:15.092511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="82.32µs"
	I1202 11:36:15.106747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="74.48µs"
	I1202 11:36:16.591405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.378277ms"
	I1202 11:36:16.592234       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.676µs"
	
	
	==> kube-proxy [28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 11:31:24.341304       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 11:31:24.349628       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E1202 11:31:24.349847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:31:24.516756       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 11:31:24.516794       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 11:31:24.516840       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:31:24.521066       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:31:24.521669       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:31:24.521810       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:31:24.544631       1 config.go:199] "Starting service config controller"
	I1202 11:31:24.544656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:31:24.544760       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:31:24.544767       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:31:24.546593       1 config.go:328] "Starting node config controller"
	I1202 11:31:24.555031       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:31:24.644874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:31:24.644939       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:31:24.659340       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55] <==
	W1202 11:31:16.359632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:16.359683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:16.359821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 11:31:16.359854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:16.359953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 11:31:16.359994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.170036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 11:31:17.170071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.175313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 11:31:17.175385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.343360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 11:31:17.343518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.423570       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:17.424077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.477506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:31:17.477537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.558906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:17.559078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.571603       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:31:17.571679       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:31:17.575523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:17.576010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.576175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 11:31:17.576213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1202 11:31:19.948258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078398    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="liveness-probe"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078481    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2473044-c394-4b78-8583-763661c9c329" containerName="registry-proxy"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078515    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b9feacd-f2e4-41f7-abc9-06e472d66f0b" containerName="volume-snapshot-controller"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078547    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea0e750d-7300-4238-9443-627b04eb650d" containerName="volume-snapshot-controller"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078578    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eacac2d8-005d-4f85-aa5f-5ee6725473a4" containerName="csi-resizer"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078624    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="node-driver-registrar"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078664    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-external-health-monitor-controller"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078754    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="179b2fd0-56c9-4e0e-8288-e66d73594712" containerName="task-pv-container"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078787    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-provisioner"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078825    1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ba754ca-3bc4-4639-bbf2-9d771c422d1f" containerName="registry"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.078923    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-external-health-monitor-controller"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.078956    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-provisioner"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.078988    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea0e750d-7300-4238-9443-627b04eb650d" containerName="volume-snapshot-controller"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079024    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="hostpath"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079058    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2473044-c394-4b78-8583-763661c9c329" containerName="registry-proxy"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079097    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacac2d8-005d-4f85-aa5f-5ee6725473a4" containerName="csi-resizer"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079127    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="liveness-probe"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079157    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="9090d43f-db00-4d9f-a761-7e784e7d66e9" containerName="csi-attacher"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079186    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="179b2fd0-56c9-4e0e-8288-e66d73594712" containerName="task-pv-container"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079217    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9feacd-f2e4-41f7-abc9-06e472d66f0b" containerName="volume-snapshot-controller"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079246    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="node-driver-registrar"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079281    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-snapshotter"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079310    1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba754ca-3bc4-4639-bbf2-9d771c422d1f" containerName="registry"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.111291    1213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sw9v\" (UniqueName: \"kubernetes.io/projected/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49-kube-api-access-5sw9v\") pod \"hello-world-app-55bf9c44b4-bq9jt\" (UID: \"6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49\") " pod="default/hello-world-app-55bf9c44b4-bq9jt"
	Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.980246    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9x4xz" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328] <==
	I1202 11:31:33.446653       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 11:31:33.475001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 11:31:33.475232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 11:31:33.571674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 11:31:33.584238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b!
	I1202 11:31:33.585297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a34dd670-d034-4d97-b122-ad1727e6d2ec", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b became leader
	I1202 11:31:33.684810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093588 -n addons-093588
helpers_test.go:261: (dbg) Run:  kubectl --context addons-093588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-093588 describe pod ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093588 describe pod ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw: exit status 1 (65.427052ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7l67n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s2pxw" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-093588 describe pod ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 addons disable ingress --alsologtostderr -v=1: (7.695479411s)
--- FAIL: TestAddons/parallel/Ingress (150.96s)
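For reference, a minimal client-go sketch of the same non-running-pod query that the post-mortem helper issues above via kubectl's --field-selector. This is an illustrative reconstruction, not part of the captured test output; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location (~/.kube/config); adjust for the minikube profile in use.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}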

x
+
TestAddons/parallel/MetricsServer (328.33s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.546843ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004807777s
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (63.189305ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m9.741009186s

                                                
                                                
** /stderr **
I1202 11:33:35.743064   13416 retry.go:31] will retry after 2.036288503s: exit status 1
2024/12/02 11:33:36 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:36 [DEBUG] GET http://192.168.39.203:5000: retrying in 2s (3 left)
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (71.328697ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m11.849351125s

                                                
                                                
** /stderr **
I1202 11:33:37.851244   13416 retry.go:31] will retry after 4.200245065s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (68.727331ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m16.118720835s

                                                
                                                
** /stderr **
I1202 11:33:42.120452   13416 retry.go:31] will retry after 5.887113662s: exit status 1
2024/12/02 11:33:42 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:42 [DEBUG] GET http://192.168.39.203:5000: retrying in 8s (1 left)
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (67.764347ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m22.074079386s

                                                
                                                
** /stderr **
I1202 11:33:48.075645   13416 retry.go:31] will retry after 6.054138856s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (67.650778ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m28.196022399s

                                                
                                                
** /stderr **
I1202 11:33:54.197780   13416 retry.go:31] will retry after 11.307488991s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (68.426352ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m39.572832965s

                                                
                                                
** /stderr **
I1202 11:34:05.574582   13416 retry.go:31] will retry after 20.26174035s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (60.669164ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 2m59.897647655s

                                                
                                                
** /stderr **
I1202 11:34:25.899426   13416 retry.go:31] will retry after 23.976957195s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (61.55236ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 3m23.936541748s

                                                
                                                
** /stderr **
I1202 11:34:49.938209   13416 retry.go:31] will retry after 46.09642529s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (60.450934ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 4m10.093974986s

                                                
                                                
** /stderr **
I1202 11:35:36.095826   13416 retry.go:31] will retry after 1m16.194276554s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (61.067481ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 5m26.349745788s

                                                
                                                
** /stderr **
I1202 11:36:52.351533   13416 retry.go:31] will retry after 38.624674627s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (59.704462ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 6m5.035795716s

                                                
                                                
** /stderr **
I1202 11:37:31.037788   13416 retry.go:31] will retry after 1m25.423167686s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-093588 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-093588 top pods -n kube-system: exit status 1 (59.858264ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-9x4xz, age: 7m30.519323823s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
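Every attempt above fails with "Metrics not available" until the test's budget runs out; the growing intervals come from minikube's retry helper (retry.go:31). A standalone Go sketch of the same poll-with-backoff pattern (illustrative, not the actual retry package):

	// Sketch: keep running `kubectl top pods` until it succeeds or an overall
	// deadline expires, roughly mirroring the retry intervals logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForPodMetrics(kubeContext, namespace string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		backoff := 2 * time.Second
		for {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"top", "pods", "-n", namespace).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("metrics never became available: %v\n%s", err, out)
			}
			fmt.Printf("will retry after %s: %v\n", backoff, err)
			time.Sleep(backoff)
			if backoff < time.Minute {
				backoff *= 2 // grow the delay, loosely like the logged retries
			}
		}
	}

	func main() {
		if err := waitForPodMetrics("addons-093588", "kube-system", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}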
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-093588 -n addons-093588
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 logs -n 25: (1.164214305s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-407914                                                                     | download-only-407914 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| delete  | -p download-only-257770                                                                     | download-only-257770 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-408241 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | binary-mirror-408241                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43999                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-408241                                                                     | binary-mirror-408241 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-093588                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-093588                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-093588 --wait=true                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:32 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:32 UTC | 02 Dec 24 11:32 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:32 UTC | 02 Dec 24 11:33 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | -p addons-093588                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-093588 ssh cat                                                                       | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | /opt/local-path-provisioner/pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-093588 ip                                                                            | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-093588 ssh curl -s                                                                   | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons                                                                        | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-093588 ip                                                                            | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-093588 addons disable                                                                | addons-093588        | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:37.455381   14046 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:37.455480   14046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:37.455489   14046 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:37.455493   14046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:37.455668   14046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:30:37.456323   14046 out.go:352] Setting JSON to false
	I1202 11:30:37.457128   14046 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":789,"bootTime":1733138248,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:37.457182   14046 start.go:139] virtualization: kvm guest
	I1202 11:30:37.459050   14046 out.go:177] * [addons-093588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:30:37.460220   14046 notify.go:220] Checking for updates...
	I1202 11:30:37.460254   14046 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:30:37.461315   14046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:37.462351   14046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:30:37.463400   14046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:37.464380   14046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:30:37.465325   14046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:30:37.466424   14046 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:37.495915   14046 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 11:30:37.497029   14046 start.go:297] selected driver: kvm2
	I1202 11:30:37.497047   14046 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:30:37.497060   14046 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:30:37.497712   14046 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:37.497776   14046 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:30:37.512199   14046 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:30:37.512258   14046 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:37.512498   14046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:30:37.512526   14046 cni.go:84] Creating CNI manager for ""
	I1202 11:30:37.512569   14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:30:37.512581   14046 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:37.512629   14046 start.go:340] cluster config:
	{Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:37.512716   14046 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:37.515047   14046 out.go:177] * Starting "addons-093588" primary control-plane node in "addons-093588" cluster
	I1202 11:30:37.516087   14046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:37.516117   14046 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:37.516127   14046 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:37.516196   14046 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:30:37.516208   14046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
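The preload step above finds the cri-o image tarball already in the local cache and skips the download. A minimal sketch of that check using only the standard library (the path is taken from the log; the function name is illustrative, not minikube's):

	// Sketch: skip the preload download when the tarball is already cached.
	package main

	import (
		"fmt"
		"os"
	)

	func preloadCached(path string) bool {
		info, err := os.Stat(path)
		return err == nil && info.Size() > 0
	}

	func main() {
		tarball := "/home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
		if preloadCached(tarball) {
			fmt.Println("found local preload, skipping download")
		} else {
			fmt.Println("preload missing, would download it here")
		}
	}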
	I1202 11:30:37.516518   14046 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json ...
	I1202 11:30:37.516542   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json: {Name:mk15de776ac6faf6fd8a23110b6fb90c273126c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:30:37.516686   14046 start.go:360] acquireMachinesLock for addons-093588: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:30:37.516736   14046 start.go:364] duration metric: took 35.877µs to acquireMachinesLock for "addons-093588"
	I1202 11:30:37.516755   14046 start.go:93] Provisioning new machine with config: &{Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:30:37.516809   14046 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 11:30:37.518955   14046 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1202 11:30:37.519064   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:30:37.519111   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:30:37.532176   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I1202 11:30:37.532631   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:30:37.533117   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:30:37.533134   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:30:37.533432   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:30:37.533598   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:30:37.533741   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:30:37.533872   14046 start.go:159] libmachine.API.Create for "addons-093588" (driver="kvm2")
	I1202 11:30:37.533900   14046 client.go:168] LocalClient.Create starting
	I1202 11:30:37.533936   14046 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:30:37.890362   14046 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:30:38.028981   14046 main.go:141] libmachine: Running pre-create checks...
	I1202 11:30:38.029000   14046 main.go:141] libmachine: (addons-093588) Calling .PreCreateCheck
	I1202 11:30:38.029460   14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
	I1202 11:30:38.029866   14046 main.go:141] libmachine: Creating machine...
	I1202 11:30:38.029880   14046 main.go:141] libmachine: (addons-093588) Calling .Create
	I1202 11:30:38.030036   14046 main.go:141] libmachine: (addons-093588) Creating KVM machine...
	I1202 11:30:38.031150   14046 main.go:141] libmachine: (addons-093588) DBG | found existing default KVM network
	I1202 11:30:38.031811   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.031684   14068 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I1202 11:30:38.031852   14046 main.go:141] libmachine: (addons-093588) DBG | created network xml: 
	I1202 11:30:38.031872   14046 main.go:141] libmachine: (addons-093588) DBG | <network>
	I1202 11:30:38.031885   14046 main.go:141] libmachine: (addons-093588) DBG |   <name>mk-addons-093588</name>
	I1202 11:30:38.031900   14046 main.go:141] libmachine: (addons-093588) DBG |   <dns enable='no'/>
	I1202 11:30:38.031929   14046 main.go:141] libmachine: (addons-093588) DBG |   
	I1202 11:30:38.031958   14046 main.go:141] libmachine: (addons-093588) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1202 11:30:38.031973   14046 main.go:141] libmachine: (addons-093588) DBG |     <dhcp>
	I1202 11:30:38.031985   14046 main.go:141] libmachine: (addons-093588) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1202 11:30:38.031991   14046 main.go:141] libmachine: (addons-093588) DBG |     </dhcp>
	I1202 11:30:38.031998   14046 main.go:141] libmachine: (addons-093588) DBG |   </ip>
	I1202 11:30:38.032003   14046 main.go:141] libmachine: (addons-093588) DBG |   
	I1202 11:30:38.032010   14046 main.go:141] libmachine: (addons-093588) DBG | </network>
	I1202 11:30:38.032020   14046 main.go:141] libmachine: (addons-093588) DBG | 
	I1202 11:30:38.037024   14046 main.go:141] libmachine: (addons-093588) DBG | trying to create private KVM network mk-addons-093588 192.168.39.0/24...
	I1202 11:30:38.095436   14046 main.go:141] libmachine: (addons-093588) DBG | private KVM network mk-addons-093588 192.168.39.0/24 created
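The DBG lines above show the exact libvirt network XML the kvm2 driver defines for the private mk-addons-093588 network. A sketch of rendering that XML with text/template (the values mirror the log; the helper itself is illustrative and not the driver's actual code):

	// Sketch: render the private-network XML shown in the DBG output above.
	package main

	import (
		"os"
		"text/template"
	)

	const networkTmpl = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	type netParams struct {
		Name, Gateway, Netmask, ClientMin, ClientMax string
	}

	func main() {
		p := netParams{
			Name:      "mk-addons-093588",
			Gateway:   "192.168.39.1",
			Netmask:   "255.255.255.0",
			ClientMin: "192.168.39.2",
			ClientMax: "192.168.39.253",
		}
		// The rendered XML is what gets handed to libvirt to define the network.
		template.Must(template.New("net").Parse(networkTmpl)).Execute(os.Stdout, p)
	}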
	I1202 11:30:38.095476   14046 main.go:141] libmachine: (addons-093588) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 ...
	I1202 11:30:38.095496   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.095389   14068 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:38.095510   14046 main.go:141] libmachine: (addons-093588) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:30:38.095536   14046 main.go:141] libmachine: (addons-093588) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:30:38.351649   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.351512   14068 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa...
	I1202 11:30:38.416171   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.416080   14068 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/addons-093588.rawdisk...
	I1202 11:30:38.416198   14046 main.go:141] libmachine: (addons-093588) DBG | Writing magic tar header
	I1202 11:30:38.416275   14046 main.go:141] libmachine: (addons-093588) DBG | Writing SSH key tar header
	I1202 11:30:38.416312   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.416182   14068 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 ...
	I1202 11:30:38.416332   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 (perms=drwx------)
	I1202 11:30:38.416347   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588
	I1202 11:30:38.416361   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:30:38.416368   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:38.416379   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:30:38.416384   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:30:38.416391   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:30:38.416403   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:30:38.416414   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:30:38.416422   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:30:38.416433   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:30:38.416445   14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home
	I1202 11:30:38.416458   14046 main.go:141] libmachine: (addons-093588) DBG | Skipping /home - not owner
	I1202 11:30:38.416469   14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:30:38.416477   14046 main.go:141] libmachine: (addons-093588) Creating domain...
	I1202 11:30:38.417348   14046 main.go:141] libmachine: (addons-093588) define libvirt domain using xml: 
	I1202 11:30:38.417385   14046 main.go:141] libmachine: (addons-093588) <domain type='kvm'>
	I1202 11:30:38.417398   14046 main.go:141] libmachine: (addons-093588)   <name>addons-093588</name>
	I1202 11:30:38.417413   14046 main.go:141] libmachine: (addons-093588)   <memory unit='MiB'>4000</memory>
	I1202 11:30:38.417424   14046 main.go:141] libmachine: (addons-093588)   <vcpu>2</vcpu>
	I1202 11:30:38.417430   14046 main.go:141] libmachine: (addons-093588)   <features>
	I1202 11:30:38.417442   14046 main.go:141] libmachine: (addons-093588)     <acpi/>
	I1202 11:30:38.417452   14046 main.go:141] libmachine: (addons-093588)     <apic/>
	I1202 11:30:38.417460   14046 main.go:141] libmachine: (addons-093588)     <pae/>
	I1202 11:30:38.417469   14046 main.go:141] libmachine: (addons-093588)     
	I1202 11:30:38.417482   14046 main.go:141] libmachine: (addons-093588)   </features>
	I1202 11:30:38.417492   14046 main.go:141] libmachine: (addons-093588)   <cpu mode='host-passthrough'>
	I1202 11:30:38.417497   14046 main.go:141] libmachine: (addons-093588)   
	I1202 11:30:38.417520   14046 main.go:141] libmachine: (addons-093588)   </cpu>
	I1202 11:30:38.417539   14046 main.go:141] libmachine: (addons-093588)   <os>
	I1202 11:30:38.417549   14046 main.go:141] libmachine: (addons-093588)     <type>hvm</type>
	I1202 11:30:38.417564   14046 main.go:141] libmachine: (addons-093588)     <boot dev='cdrom'/>
	I1202 11:30:38.417575   14046 main.go:141] libmachine: (addons-093588)     <boot dev='hd'/>
	I1202 11:30:38.417584   14046 main.go:141] libmachine: (addons-093588)     <bootmenu enable='no'/>
	I1202 11:30:38.417595   14046 main.go:141] libmachine: (addons-093588)   </os>
	I1202 11:30:38.417604   14046 main.go:141] libmachine: (addons-093588)   <devices>
	I1202 11:30:38.417614   14046 main.go:141] libmachine: (addons-093588)     <disk type='file' device='cdrom'>
	I1202 11:30:38.417628   14046 main.go:141] libmachine: (addons-093588)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/boot2docker.iso'/>
	I1202 11:30:38.417650   14046 main.go:141] libmachine: (addons-093588)       <target dev='hdc' bus='scsi'/>
	I1202 11:30:38.417668   14046 main.go:141] libmachine: (addons-093588)       <readonly/>
	I1202 11:30:38.417681   14046 main.go:141] libmachine: (addons-093588)     </disk>
	I1202 11:30:38.417693   14046 main.go:141] libmachine: (addons-093588)     <disk type='file' device='disk'>
	I1202 11:30:38.417706   14046 main.go:141] libmachine: (addons-093588)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:30:38.417720   14046 main.go:141] libmachine: (addons-093588)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/addons-093588.rawdisk'/>
	I1202 11:30:38.417732   14046 main.go:141] libmachine: (addons-093588)       <target dev='hda' bus='virtio'/>
	I1202 11:30:38.417743   14046 main.go:141] libmachine: (addons-093588)     </disk>
	I1202 11:30:38.417756   14046 main.go:141] libmachine: (addons-093588)     <interface type='network'>
	I1202 11:30:38.417768   14046 main.go:141] libmachine: (addons-093588)       <source network='mk-addons-093588'/>
	I1202 11:30:38.417780   14046 main.go:141] libmachine: (addons-093588)       <model type='virtio'/>
	I1202 11:30:38.417789   14046 main.go:141] libmachine: (addons-093588)     </interface>
	I1202 11:30:38.417800   14046 main.go:141] libmachine: (addons-093588)     <interface type='network'>
	I1202 11:30:38.417815   14046 main.go:141] libmachine: (addons-093588)       <source network='default'/>
	I1202 11:30:38.417824   14046 main.go:141] libmachine: (addons-093588)       <model type='virtio'/>
	I1202 11:30:38.417832   14046 main.go:141] libmachine: (addons-093588)     </interface>
	I1202 11:30:38.417847   14046 main.go:141] libmachine: (addons-093588)     <serial type='pty'>
	I1202 11:30:38.417858   14046 main.go:141] libmachine: (addons-093588)       <target port='0'/>
	I1202 11:30:38.417868   14046 main.go:141] libmachine: (addons-093588)     </serial>
	I1202 11:30:38.417882   14046 main.go:141] libmachine: (addons-093588)     <console type='pty'>
	I1202 11:30:38.417900   14046 main.go:141] libmachine: (addons-093588)       <target type='serial' port='0'/>
	I1202 11:30:38.417909   14046 main.go:141] libmachine: (addons-093588)     </console>
	I1202 11:30:38.417919   14046 main.go:141] libmachine: (addons-093588)     <rng model='virtio'>
	I1202 11:30:38.417930   14046 main.go:141] libmachine: (addons-093588)       <backend model='random'>/dev/random</backend>
	I1202 11:30:38.417942   14046 main.go:141] libmachine: (addons-093588)     </rng>
	I1202 11:30:38.417955   14046 main.go:141] libmachine: (addons-093588)     
	I1202 11:30:38.417965   14046 main.go:141] libmachine: (addons-093588)     
	I1202 11:30:38.417974   14046 main.go:141] libmachine: (addons-093588)   </devices>
	I1202 11:30:38.417982   14046 main.go:141] libmachine: (addons-093588) </domain>
	I1202 11:30:38.417991   14046 main.go:141] libmachine: (addons-093588) 
	I1202 11:30:38.423153   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:41:86:b0 in network default
	I1202 11:30:38.423632   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:38.423650   14046 main.go:141] libmachine: (addons-093588) Ensuring networks are active...
	I1202 11:30:38.424163   14046 main.go:141] libmachine: (addons-093588) Ensuring network default is active
	I1202 11:30:38.424413   14046 main.go:141] libmachine: (addons-093588) Ensuring network mk-addons-093588 is active
	I1202 11:30:38.424831   14046 main.go:141] libmachine: (addons-093588) Getting domain xml...
	I1202 11:30:38.425386   14046 main.go:141] libmachine: (addons-093588) Creating domain...
	I1202 11:30:39.768153   14046 main.go:141] libmachine: (addons-093588) Waiting to get IP...
	I1202 11:30:39.769048   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:39.769406   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:39.769434   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:39.769391   14068 retry.go:31] will retry after 262.465444ms: waiting for machine to come up
	I1202 11:30:40.033678   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:40.034019   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:40.034047   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.033987   14068 retry.go:31] will retry after 268.465291ms: waiting for machine to come up
	I1202 11:30:40.304474   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:40.304856   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:40.304886   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.304845   14068 retry.go:31] will retry after 459.329717ms: waiting for machine to come up
	I1202 11:30:40.765148   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:40.765539   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:40.765576   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.765500   14068 retry.go:31] will retry after 473.589572ms: waiting for machine to come up
	I1202 11:30:41.241029   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:41.241356   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:41.241402   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:41.241309   14068 retry.go:31] will retry after 489.24768ms: waiting for machine to come up
	I1202 11:30:41.732001   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:41.732402   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:41.732428   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:41.732337   14068 retry.go:31] will retry after 764.713135ms: waiting for machine to come up
	I1202 11:30:42.498043   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:42.498440   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:42.498462   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:42.498418   14068 retry.go:31] will retry after 1.105216684s: waiting for machine to come up
	I1202 11:30:43.605335   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:43.605759   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:43.605784   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:43.605714   14068 retry.go:31] will retry after 1.334125941s: waiting for machine to come up
	I1202 11:30:44.942153   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:44.942579   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:44.942604   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:44.942535   14068 retry.go:31] will retry after 1.384283544s: waiting for machine to come up
	I1202 11:30:46.329052   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:46.329455   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:46.329485   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:46.329405   14068 retry.go:31] will retry after 1.997806074s: waiting for machine to come up
	I1202 11:30:48.328389   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:48.328833   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:48.328861   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:48.328789   14068 retry.go:31] will retry after 2.344508632s: waiting for machine to come up
	I1202 11:30:50.676551   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:50.676981   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:50.677010   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:50.676934   14068 retry.go:31] will retry after 3.069367748s: waiting for machine to come up
	I1202 11:30:53.748570   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:53.748926   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:53.748950   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:53.748888   14068 retry.go:31] will retry after 2.996899134s: waiting for machine to come up
	I1202 11:30:56.749121   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:30:56.749572   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
	I1202 11:30:56.749597   14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:56.749520   14068 retry.go:31] will retry after 4.228069851s: waiting for machine to come up
	I1202 11:31:00.981506   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:00.981936   14046 main.go:141] libmachine: (addons-093588) Found IP for machine: 192.168.39.203
	I1202 11:31:00.981958   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has current primary IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:00.981964   14046 main.go:141] libmachine: (addons-093588) Reserving static IP address...
	I1202 11:31:00.982295   14046 main.go:141] libmachine: (addons-093588) DBG | unable to find host DHCP lease matching {name: "addons-093588", mac: "52:54:00:8a:ff:d0", ip: "192.168.39.203"} in network mk-addons-093588
	I1202 11:31:01.048415   14046 main.go:141] libmachine: (addons-093588) DBG | Getting to WaitForSSH function...
	I1202 11:31:01.048442   14046 main.go:141] libmachine: (addons-093588) Reserved static IP address: 192.168.39.203
	I1202 11:31:01.048454   14046 main.go:141] libmachine: (addons-093588) Waiting for SSH to be available...
	I1202 11:31:01.051059   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.051438   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.051472   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.051619   14046 main.go:141] libmachine: (addons-093588) DBG | Using SSH client type: external
	I1202 11:31:01.051638   14046 main.go:141] libmachine: (addons-093588) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa (-rw-------)
	I1202 11:31:01.051663   14046 main.go:141] libmachine: (addons-093588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:31:01.051673   14046 main.go:141] libmachine: (addons-093588) DBG | About to run SSH command:
	I1202 11:31:01.051681   14046 main.go:141] libmachine: (addons-093588) DBG | exit 0
	I1202 11:31:01.179840   14046 main.go:141] libmachine: (addons-093588) DBG | SSH cmd err, output: <nil>: 
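The empty "SSH cmd err, output: <nil>" line above means the external ssh probe (a bare exit 0 run with host-key checking disabled) finally succeeded, which is how libmachine decides the guest's sshd is ready. A minimal stand-alone version of the same probe, reusing the user, key path and options shown in the log:

    # Re-run a no-op command until sshd in the guest accepts the connection.
    KEY=/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa
    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
              -o ConnectTimeout=10 -o PasswordAuthentication=no \
              -i "$KEY" docker@192.168.39.203 exit 0 2>/dev/null; do
        sleep 2
    done
    echo "SSH is available"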
	I1202 11:31:01.180069   14046 main.go:141] libmachine: (addons-093588) KVM machine creation complete!
	I1202 11:31:01.180372   14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
	I1202 11:31:01.181030   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:01.181223   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:01.181368   14046 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:31:01.181383   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:01.182471   14046 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:31:01.182489   14046 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:31:01.182497   14046 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:31:01.182504   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.184526   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.184815   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.184837   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.184948   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.185116   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.185254   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.185411   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.185565   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.185779   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.185793   14046 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:31:01.282941   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:31:01.282962   14046 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:31:01.282973   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.285619   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.285952   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.285985   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.286145   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.286305   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.286461   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.286576   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.286731   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.286920   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.286931   14046 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:31:01.388462   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:31:01.388501   14046 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:31:01.388507   14046 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:31:01.388512   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:31:01.388677   14046 buildroot.go:166] provisioning hostname "addons-093588"
	I1202 11:31:01.388696   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:31:01.388841   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.391137   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.391506   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.391534   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.391652   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.391816   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.391965   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.392102   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.392246   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.392391   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.392402   14046 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-093588 && echo "addons-093588" | sudo tee /etc/hostname
	I1202 11:31:01.506202   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093588
	
	I1202 11:31:01.506240   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.509060   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.509411   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.509432   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.509608   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.509804   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.509958   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.510079   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.510222   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:01.510393   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:01.510415   14046 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-093588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093588/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-093588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:31:01.616311   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:31:01.616347   14046 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:31:01.616393   14046 buildroot.go:174] setting up certificates
	I1202 11:31:01.616410   14046 provision.go:84] configureAuth start
	I1202 11:31:01.616430   14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
	I1202 11:31:01.616682   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:01.619505   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.620156   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.620182   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.620327   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.622275   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.622543   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.622570   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.622692   14046 provision.go:143] copyHostCerts
	I1202 11:31:01.622767   14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:31:01.622899   14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:31:01.622955   14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:31:01.623001   14046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.addons-093588 san=[127.0.0.1 192.168.39.203 addons-093588 localhost minikube]
	I1202 11:31:01.923775   14046 provision.go:177] copyRemoteCerts
	I1202 11:31:01.923832   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:31:01.923854   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:01.926193   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.926521   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:01.926551   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:01.926687   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:01.926841   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:01.926972   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:01.927075   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.005579   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:31:02.029137   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:31:02.051665   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 11:31:02.074035   14046 provision.go:87] duration metric: took 457.609565ms to configureAuth
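configureAuth above copies the host CA material and then signs a server certificate for org jenkins.addons-093588 with the SANs 127.0.0.1, 192.168.39.203, addons-093588, localhost and minikube. minikube does this in Go; purely for comparison, a hedged openssl sketch that would produce an equivalent certificate from the same ca.pem/ca-key.pem pair (output file names are illustrative):

    # Hypothetical openssl equivalent of the server certificate generated above.
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.addons-093588" \
        -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=DNS:addons-093588,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.39.203') \
        -out server.pem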
	I1202 11:31:02.074059   14046 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:31:02.074217   14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:02.074283   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.076631   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.076987   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.077013   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.077164   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.077336   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.077492   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.077615   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.077760   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:02.077906   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:02.077920   14046 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:31:02.287644   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:31:02.287666   14046 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:31:02.287672   14046 main.go:141] libmachine: (addons-093588) Calling .GetURL
	I1202 11:31:02.288858   14046 main.go:141] libmachine: (addons-093588) DBG | Using libvirt version 6000000
	I1202 11:31:02.290750   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.291050   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.291080   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.291195   14046 main.go:141] libmachine: Docker is up and running!
	I1202 11:31:02.291216   14046 main.go:141] libmachine: Reticulating splines...
	I1202 11:31:02.291222   14046 client.go:171] duration metric: took 24.757312526s to LocalClient.Create
	I1202 11:31:02.291244   14046 start.go:167] duration metric: took 24.757374154s to libmachine.API.Create "addons-093588"
	I1202 11:31:02.291261   14046 start.go:293] postStartSetup for "addons-093588" (driver="kvm2")
	I1202 11:31:02.291272   14046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:31:02.291288   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.291502   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:31:02.291522   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.293349   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.293594   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.293619   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.293743   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.293886   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.294032   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.294145   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.373911   14046 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:31:02.378111   14046 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:31:02.378132   14046 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:31:02.378192   14046 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:31:02.378214   14046 start.go:296] duration metric: took 86.945972ms for postStartSetup
	I1202 11:31:02.378245   14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
	I1202 11:31:02.378753   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:02.380981   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.381316   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.381361   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.381564   14046 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json ...
	I1202 11:31:02.381722   14046 start.go:128] duration metric: took 24.864904519s to createHost
	I1202 11:31:02.381743   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.383934   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.384272   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.384314   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.384473   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.384686   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.384826   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.384934   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.385083   14046 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:02.385236   14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.203 22 <nil> <nil>}
	I1202 11:31:02.385245   14046 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:31:02.488569   14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139062.461766092
	
	I1202 11:31:02.488586   14046 fix.go:216] guest clock: 1733139062.461766092
	I1202 11:31:02.488594   14046 fix.go:229] Guest: 2024-12-02 11:31:02.461766092 +0000 UTC Remote: 2024-12-02 11:31:02.381733026 +0000 UTC m=+24.960080527 (delta=80.033066ms)
	I1202 11:31:02.488611   14046 fix.go:200] guest clock delta is within tolerance: 80.033066ms
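fix.go above reads date +%s.%N inside the guest, compares it with the host clock, and accepts the roughly 80ms delta as within tolerance. The same check can be reproduced by hand; a sketch using the key path and IP from this run:

    # Compare guest and host clocks; small drift is tolerated, larger drift would trigger a resync.
    KEY=/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa
    GUEST=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i "$KEY" docker@192.168.39.203 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "clock delta: %.3fs\n", h - g }'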
	I1202 11:31:02.488616   14046 start.go:83] releasing machines lock for "addons-093588", held for 24.971869861s
	I1202 11:31:02.488633   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.488804   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:02.491410   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.491718   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.491740   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.491912   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.492303   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.492498   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:02.492599   14046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:31:02.492640   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.492659   14046 ssh_runner.go:195] Run: cat /version.json
	I1202 11:31:02.492682   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:02.495098   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.495504   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.495523   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.495555   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.495683   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.495836   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.495981   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.496036   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:02.496054   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:02.496127   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.496309   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:02.496461   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:02.496596   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:02.496733   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:02.593747   14046 ssh_runner.go:195] Run: systemctl --version
	I1202 11:31:02.599449   14046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:31:02.754591   14046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:31:02.760318   14046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:31:02.760381   14046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:31:02.775654   14046 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:31:02.775672   14046 start.go:495] detecting cgroup driver to use...
	I1202 11:31:02.775730   14046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:31:02.790974   14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:31:02.803600   14046 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:31:02.803656   14046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:31:02.816048   14046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:31:02.828952   14046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:31:02.939245   14046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:31:03.103186   14046 docker.go:233] disabling docker service ...
	I1202 11:31:03.103247   14046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:31:03.117174   14046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:31:03.129365   14046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:31:03.241601   14046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:31:03.354550   14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:31:03.368814   14046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:31:03.387288   14046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:31:03.387336   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.397743   14046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:31:03.397802   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.408206   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.418070   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.428088   14046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:31:03.438226   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.448028   14046 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:03.464548   14046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
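The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image registry.k8s.io/pause:3.10, cgroupfs as cgroup_manager, conmon_cgroup set to pod, and net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls. That is what minikube actually does; as an alternative sketch only, the same settings could live in their own conf.d fragment (the 99-example.conf name below is made up), since cri-o merges every /etc/crio/crio.conf.d/*.conf fragment over the base configuration:

    # Hypothetical drop-in expressing the same settings declaratively instead of sed-editing 02-crio.conf.
    printf '%s\n' \
        '[crio.image]' \
        'pause_image = "registry.k8s.io/pause:3.10"' \
        '' \
        '[crio.runtime]' \
        'cgroup_manager = "cgroupfs"' \
        'conmon_cgroup = "pod"' \
        'default_sysctls = [' \
        '  "net.ipv4.ip_unprivileged_port_start=0",' \
        ']' | sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null
    sudo systemctl restart crio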
	I1202 11:31:03.474482   14046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:31:03.483342   14046 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:31:03.483384   14046 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:31:03.495424   14046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
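The modprobe and echo above enable br_netfilter and IPv4 forwarding for the current boot only, which is all this test run needs. On a longer-lived node both would normally be persisted; a sketch using the standard modules-load.d and sysctl.d drop-in directories (the file names are illustrative):

    # Persist the module load and sysctls that the commands above apply only until reboot.
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf >/dev/null
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
        | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf >/dev/null
    sudo sysctl --system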
	I1202 11:31:03.504365   14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:03.616131   14046 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:31:03.806820   14046 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:31:03.806906   14046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:31:03.811970   14046 start.go:563] Will wait 60s for crictl version
	I1202 11:31:03.812015   14046 ssh_runner.go:195] Run: which crictl
	I1202 11:31:03.815656   14046 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:31:03.854668   14046 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:31:03.854771   14046 ssh_runner.go:195] Run: crio --version
	I1202 11:31:03.883503   14046 ssh_runner.go:195] Run: crio --version
	I1202 11:31:03.943735   14046 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:31:03.978507   14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
	I1202 11:31:03.981079   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:03.981440   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:03.981469   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:03.981694   14046 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:31:03.986029   14046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
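The one-liner above is minikube's idempotent /etc/hosts update: filter out any existing host.minikube.internal line, append the fresh mapping, and sudo-copy the temp file back (a plain redirect would not have root rights on /etc/hosts). The same pattern reappears later for control-plane.minikube.internal; as a generalized sketch (the function name is illustrative):

    # Generalized form of the /etc/hosts update pattern used above.
    update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
        { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
        sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.39.1 host.minikube.internal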
	I1202 11:31:03.999160   14046 kubeadm.go:883] updating cluster {Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:31:03.999273   14046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:31:03.999318   14046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:04.032753   14046 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 11:31:04.032848   14046 ssh_runner.go:195] Run: which lz4
	I1202 11:31:04.036732   14046 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 11:31:04.040941   14046 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 11:31:04.040969   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 11:31:05.312874   14046 crio.go:462] duration metric: took 1.276172912s to copy over tarball
	I1202 11:31:05.312957   14046 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 11:31:07.438469   14046 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125483138s)
	I1202 11:31:07.438502   14046 crio.go:469] duration metric: took 2.125592032s to extract the tarball
	I1202 11:31:07.438513   14046 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 11:31:07.475913   14046 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:07.526664   14046 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:31:07.526685   14046 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:31:07.526695   14046 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.2 crio true true} ...
	I1202 11:31:07.526796   14046 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-093588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:31:07.526870   14046 ssh_runner.go:195] Run: crio config
	I1202 11:31:07.582564   14046 cni.go:84] Creating CNI manager for ""
	I1202 11:31:07.582584   14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:31:07.582593   14046 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:31:07.582614   14046 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093588 NodeName:addons-093588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:31:07.582727   14046 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-093588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.203"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:31:07.582780   14046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:31:07.592378   14046 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:31:07.592421   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 11:31:07.601397   14046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1202 11:31:07.617029   14046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:31:07.632123   14046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
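The three scp lines above install the kubelet systemd drop-in, the kubelet unit, and the generated kubeadm config (the YAML shown earlier) onto the node. A config like this can be exercised before the real init; a sketch that assumes kubeadm was unpacked next to kubelet in /var/lib/minikube/binaries/v1.31.2, as minikube lays it out:

    # Dry-run the generated config without changing the node; paths taken from the log above.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run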
	I1202 11:31:07.647544   14046 ssh_runner.go:195] Run: grep 192.168.39.203	control-plane.minikube.internal$ /etc/hosts
	I1202 11:31:07.651140   14046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:07.662518   14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:07.774786   14046 ssh_runner.go:195] Run: sudo systemctl start kubelet
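daemon-reload plus systemctl start kubelet brings the kubelet up with the drop-in that was just written; it will typically keep restarting until kubeadm later produces /etc/kubernetes/kubelet.conf. A quick way to see what was actually loaded:

    # Show the kubelet unit together with its drop-ins and report its current state.
    sudo systemctl cat kubelet | grep -- --hostname-override
    sudo systemctl is-active kubelet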
	I1202 11:31:07.795670   14046 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588 for IP: 192.168.39.203
	I1202 11:31:07.795689   14046 certs.go:194] generating shared ca certs ...
	I1202 11:31:07.795704   14046 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:07.795860   14046 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:31:07.881230   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt ...
	I1202 11:31:07.881255   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt: {Name:mkb25dcf874cc76262dd87f7954dc5def047ba80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:07.881433   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key ...
	I1202 11:31:07.881447   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key: {Name:mk24aaecfce06715328a2e1bdf78912e66e577e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:07.881546   14046 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:31:08.066592   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt ...
	I1202 11:31:08.066617   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt: {Name:mk353521566f5b511b2c49b5facbb9d7e8a55579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.066785   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key ...
	I1202 11:31:08.066799   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key: {Name:mk0fae51faecacd368a9e9845e8ec1cc10ac1c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.066891   14046 certs.go:256] generating profile certs ...
	I1202 11:31:08.066943   14046 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key
	I1202 11:31:08.066963   14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt with IP's: []
	I1202 11:31:08.199504   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt ...
	I1202 11:31:08.199534   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: {Name:mke09ad3d888dc6da1ff7604f62658a689c18924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.199693   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key ...
	I1202 11:31:08.199704   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key: {Name:mk29de8a87eafaedfa0731583b4b03810c89d586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.199771   14046 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78
	I1202 11:31:08.199789   14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203]
	I1202 11:31:08.366826   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 ...
	I1202 11:31:08.366857   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78: {Name:mkccb5564ec2f6a186fbab8f5cb67d658caada7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.367032   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78 ...
	I1202 11:31:08.367046   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78: {Name:mk589a09a46c7953a1cc24cad0c706bf9dfb6e43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.367125   14046 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt
	I1202 11:31:08.367205   14046 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key
	I1202 11:31:08.367257   14046 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key
	I1202 11:31:08.367277   14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt with IP's: []
	I1202 11:31:08.450648   14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt ...
	I1202 11:31:08.450679   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt: {Name:mke55f3a980df3599f606cdcab7f35740d5da41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.450843   14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key ...
	I1202 11:31:08.450854   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key: {Name:mk944302d6559b5e702f266fc95edf52b4fa7b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:08.451514   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:31:08.451556   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:31:08.451584   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:31:08.451613   14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:31:08.452184   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:31:08.489217   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:31:08.517562   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:31:08.543285   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:31:08.569173   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 11:31:08.594856   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:31:08.620538   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:31:08.645986   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 11:31:08.668456   14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:31:08.690521   14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
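The scp batch above pushes the CA, apiserver and proxy-client key pairs into /var/lib/minikube/certs, plus the CA into /usr/share/ca-certificates and a kubeconfig. The apiserver certificate was generated with the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.203; assuming openssl is available in the guest, that can be checked with:

    # Inspect the SANs baked into the apiserver certificate that was just copied over.
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'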
	I1202 11:31:08.706980   14046 ssh_runner.go:195] Run: openssl version
	I1202 11:31:08.712732   14046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:31:08.723032   14046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:08.727495   14046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:08.727526   14046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:08.733224   14046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
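The three commands above make the minikube CA trusted system-wide: link it from /usr/share/ca-certificates into /etc/ssl/certs, compute its OpenSSL subject hash, and create the b5213941.0 hash link that OpenSSL actually looks up during verification. Recomputing the hash and recreating the link by hand looks like this:

    # Recompute the subject hash and recreate the trust-store link the same way the log does.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"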
	I1202 11:31:08.743680   14046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:31:08.747664   14046 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:31:08.747708   14046 kubeadm.go:392] StartCluster: {Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:31:08.747775   14046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:31:08.747814   14046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:31:08.787888   14046 cri.go:89] found id: ""
	I1202 11:31:08.787950   14046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:31:08.797362   14046 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:31:08.809451   14046 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:31:08.820282   14046 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:31:08.820297   14046 kubeadm.go:157] found existing configuration files:
	
	I1202 11:31:08.820333   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:31:08.828677   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:31:08.828711   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:31:08.837308   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:31:08.845525   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:31:08.845558   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:31:08.854180   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:31:08.862334   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:31:08.862373   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:31:08.871107   14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:31:08.879485   14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:31:08.879516   14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 11:31:08.888193   14046 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 11:31:09.042807   14046 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 11:31:19.640576   14046 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:31:19.640684   14046 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:31:19.640804   14046 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:31:19.640929   14046 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:31:19.641054   14046 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:31:19.641154   14046 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:31:19.642488   14046 out.go:235]   - Generating certificates and keys ...
	I1202 11:31:19.642574   14046 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:31:19.642657   14046 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:31:19.642746   14046 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:31:19.642837   14046 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:31:19.642899   14046 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:31:19.642942   14046 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:31:19.642987   14046 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:31:19.643101   14046 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093588 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I1202 11:31:19.643167   14046 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:31:19.643335   14046 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093588 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
	I1202 11:31:19.643411   14046 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:31:19.643467   14046 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:31:19.643523   14046 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:31:19.643615   14046 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:31:19.643692   14046 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:31:19.643782   14046 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:31:19.643865   14046 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:31:19.643943   14046 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:31:19.643993   14046 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:31:19.644064   14046 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:31:19.644161   14046 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:31:19.645478   14046 out.go:235]   - Booting up control plane ...
	I1202 11:31:19.645576   14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:31:19.645645   14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:31:19.645701   14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:31:19.645795   14046 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:31:19.645918   14046 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:31:19.645986   14046 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:31:19.646132   14046 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:31:19.646250   14046 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:31:19.646306   14046 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.795667ms
	I1202 11:31:19.646374   14046 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:31:19.646429   14046 kubeadm.go:310] [api-check] The API server is healthy after 5.502118168s
	I1202 11:31:19.646515   14046 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:31:19.646628   14046 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:31:19.646678   14046 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:31:19.646851   14046 kubeadm.go:310] [mark-control-plane] Marking the node addons-093588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:31:19.646906   14046 kubeadm.go:310] [bootstrap-token] Using token: 1k1sz6.8l7j2y5vp52tcjwr
	I1202 11:31:19.648784   14046 out.go:235]   - Configuring RBAC rules ...
	I1202 11:31:19.648889   14046 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:31:19.648963   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:31:19.649092   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:31:19.649233   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:31:19.649389   14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:31:19.649475   14046 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:31:19.649614   14046 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:31:19.649654   14046 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:31:19.649717   14046 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:31:19.649728   14046 kubeadm.go:310] 
	I1202 11:31:19.649818   14046 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:31:19.649829   14046 kubeadm.go:310] 
	I1202 11:31:19.649939   14046 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:31:19.649949   14046 kubeadm.go:310] 
	I1202 11:31:19.649985   14046 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:31:19.650037   14046 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:31:19.650080   14046 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:31:19.650086   14046 kubeadm.go:310] 
	I1202 11:31:19.650138   14046 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:31:19.650144   14046 kubeadm.go:310] 
	I1202 11:31:19.650183   14046 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:31:19.650189   14046 kubeadm.go:310] 
	I1202 11:31:19.650234   14046 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:31:19.650304   14046 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:31:19.650365   14046 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:31:19.650373   14046 kubeadm.go:310] 
	I1202 11:31:19.650440   14046 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:31:19.650514   14046 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:31:19.650522   14046 kubeadm.go:310] 
	I1202 11:31:19.650595   14046 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1k1sz6.8l7j2y5vp52tcjwr \
	I1202 11:31:19.650697   14046 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 11:31:19.650725   14046 kubeadm.go:310] 	--control-plane 
	I1202 11:31:19.650735   14046 kubeadm.go:310] 
	I1202 11:31:19.650849   14046 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:31:19.650859   14046 kubeadm.go:310] 
	I1202 11:31:19.650970   14046 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1k1sz6.8l7j2y5vp52tcjwr \
	I1202 11:31:19.651080   14046 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 11:31:19.651101   14046 cni.go:84] Creating CNI manager for ""
	I1202 11:31:19.651113   14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:31:19.652324   14046 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 11:31:19.653350   14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 11:31:19.663856   14046 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 11:31:19.683635   14046 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:31:19.683704   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:19.683726   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093588 minikube.k8s.io/updated_at=2024_12_02T11_31_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=addons-093588 minikube.k8s.io/primary=true
	I1202 11:31:19.809186   14046 ops.go:34] apiserver oom_adj: -16
	I1202 11:31:19.809308   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.309679   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.809603   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.310155   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.809913   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.310060   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.809479   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.309701   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.809394   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.310391   14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.414631   14046 kubeadm.go:1113] duration metric: took 4.730985398s to wait for elevateKubeSystemPrivileges
	I1202 11:31:24.414668   14046 kubeadm.go:394] duration metric: took 15.666963518s to StartCluster
	I1202 11:31:24.414689   14046 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.414816   14046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:31:24.415263   14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.415607   14046 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:31:24.415637   14046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:31:24.415683   14046 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 11:31:24.415803   14046 addons.go:69] Setting inspektor-gadget=true in profile "addons-093588"
	I1202 11:31:24.415812   14046 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093588"
	I1202 11:31:24.415821   14046 addons.go:234] Setting addon inspektor-gadget=true in "addons-093588"
	I1202 11:31:24.415825   14046 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093588"
	I1202 11:31:24.415833   14046 addons.go:69] Setting storage-provisioner=true in profile "addons-093588"
	I1202 11:31:24.415851   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415862   14046 addons.go:234] Setting addon storage-provisioner=true in "addons-093588"
	I1202 11:31:24.415871   14046 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093588"
	I1202 11:31:24.415888   14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.415899   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415899   14046 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093588"
	I1202 11:31:24.415926   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415938   14046 addons.go:69] Setting metrics-server=true in profile "addons-093588"
	I1202 11:31:24.415951   14046 addons.go:234] Setting addon metrics-server=true in "addons-093588"
	I1202 11:31:24.415975   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.415801   14046 addons.go:69] Setting yakd=true in profile "addons-093588"
	I1202 11:31:24.416328   14046 addons.go:69] Setting volcano=true in profile "addons-093588"
	I1202 11:31:24.416332   14046 addons.go:234] Setting addon yakd=true in "addons-093588"
	I1202 11:31:24.416340   14046 addons.go:234] Setting addon volcano=true in "addons-093588"
	I1202 11:31:24.416348   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416362   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416364   14046 addons.go:69] Setting volumesnapshots=true in profile "addons-093588"
	I1202 11:31:24.416354   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416376   14046 addons.go:234] Setting addon volumesnapshots=true in "addons-093588"
	I1202 11:31:24.416348   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416391   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416392   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416393   14046 addons.go:69] Setting registry=true in profile "addons-093588"
	I1202 11:31:24.416405   14046 addons.go:234] Setting addon registry=true in "addons-093588"
	I1202 11:31:24.416412   14046 addons.go:69] Setting cloud-spanner=true in profile "addons-093588"
	I1202 11:31:24.416416   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416426   14046 addons.go:234] Setting addon cloud-spanner=true in "addons-093588"
	I1202 11:31:24.416405   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416426   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416450   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416483   14046 addons.go:69] Setting gcp-auth=true in profile "addons-093588"
	I1202 11:31:24.416504   14046 addons.go:69] Setting ingress=true in profile "addons-093588"
	I1202 11:31:24.416357   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416517   14046 addons.go:234] Setting addon ingress=true in "addons-093588"
	I1202 11:31:24.416519   14046 mustload.go:65] Loading cluster: addons-093588
	I1202 11:31:24.416532   14046 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093588"
	I1202 11:31:24.416560   14046 addons.go:69] Setting default-storageclass=true in profile "addons-093588"
	I1202 11:31:24.416586   14046 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093588"
	I1202 11:31:24.416590   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416632   14046 addons.go:69] Setting ingress-dns=true in profile "addons-093588"
	I1202 11:31:24.416650   14046 addons.go:234] Setting addon ingress-dns=true in "addons-093588"
	I1202 11:31:24.416682   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416780   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416807   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416363   14046 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-093588"
	I1202 11:31:24.416859   14046 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-093588"
	I1202 11:31:24.416872   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416890   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.416900   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416439   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416920   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.416352   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.416565   14046 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093588"
	I1202 11:31:24.416997   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417021   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417031   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417043   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417073   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.417115   14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.417236   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.417280   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417310   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417478   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417508   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417581   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.417639   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.417599   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.417818   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.418318   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.418354   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.418457   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.418520   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.419789   14046 out.go:177] * Verifying Kubernetes components...
	I1202 11:31:24.421204   14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:24.434933   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1202 11:31:24.456366   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
	I1202 11:31:24.456379   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
	I1202 11:31:24.456385   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
	I1202 11:31:24.456522   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
	I1202 11:31:24.457088   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.457138   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.457205   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458101   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458260   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.458270   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.458324   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458368   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.457145   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.458977   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.458999   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.459125   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.459137   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.459192   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.459310   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.459320   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.459719   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.459746   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.461092   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.461110   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.461162   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.461200   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.461239   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.465673   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.466074   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.466128   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.466547   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.466716   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.466745   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.466905   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.466931   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.471312   14046 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093588"
	I1202 11:31:24.471365   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.471748   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.471776   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.496148   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I1202 11:31:24.496823   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.496855   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I1202 11:31:24.497309   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.497323   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.497591   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.497750   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.497825   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.498357   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.498378   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.498441   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I1202 11:31:24.498718   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I1202 11:31:24.498914   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.499256   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.499500   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.499512   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.500301   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.500723   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.500766   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.500873   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.500903   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.500931   14046 addons.go:234] Setting addon default-storageclass=true in "addons-093588"
	I1202 11:31:24.501152   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.501175   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.501511   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.501552   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.501848   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.501865   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.502157   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I1202 11:31:24.502338   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.502394   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I1202 11:31:24.504903   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I1202 11:31:24.504935   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.504948   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I1202 11:31:24.504909   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46127
	I1202 11:31:24.505440   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.505449   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.505787   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.505960   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.505975   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506128   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.506144   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506220   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35429
	I1202 11:31:24.506436   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.506455   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506465   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.506589   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.506809   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.506836   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.506857   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.506898   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.507198   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.507202   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.507232   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.507197   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.507262   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.507386   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.507402   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.507680   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.507716   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.507930   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.507970   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.508215   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.508266   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.508349   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.508886   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.508912   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.509328   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.509388   14046 host.go:66] Checking if "addons-093588" exists ...
	I1202 11:31:24.509745   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.509772   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.514152   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I1202 11:31:24.514536   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.515551   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.515567   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.515900   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.516060   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.516750   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.516781   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.518417   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.520369   14046 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:31:24.521684   14046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:24.521708   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:31:24.521726   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.521971   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I1202 11:31:24.522467   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.522948   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.522967   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.523428   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.524037   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.524073   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.525278   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.525908   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I1202 11:31:24.525964   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.525982   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.526153   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.526308   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I1202 11:31:24.526334   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.526703   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.526823   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.527519   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I1202 11:31:24.528000   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
	I1202 11:31:24.538245   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I1202 11:31:24.538353   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
	I1202 11:31:24.538752   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.539216   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.539400   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.539426   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.539696   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.539718   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.539894   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I1202 11:31:24.540083   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.540314   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.540385   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.540470   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I1202 11:31:24.540924   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.540946   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.541203   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.541339   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.541664   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.541754   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.542181   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.542207   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.542260   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.542496   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41727
	I1202 11:31:24.542555   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.542776   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.543259   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.543292   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.543731   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.544610   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.544628   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.544664   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.545126   14046 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1202 11:31:24.545168   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.545195   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.545269   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 11:31:24.545304   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.545395   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.545435   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.545470   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I1202 11:31:24.545618   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.546338   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.546345   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.546363   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.546433   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.546568   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.546647   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.547233   14046 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1202 11:31:24.547248   14046 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1202 11:31:24.547254   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.547267   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.547393   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547414   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.547445   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547394   14046 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1202 11:31:24.547458   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.547828   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.547575   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547955   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.547954   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.547972   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.548039   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.548201   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 11:31:24.548709   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.548404   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.548416   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.548427   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.549098   14046 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1202 11:31:24.549523   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.549556   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.549153   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.549174   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.549198   14046 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 11:31:24.550744   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.550442   14046 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:24.550949   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 11:31:24.550979   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.551266   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:24.551295   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:24.551395   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:24.551754   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:24.551922   14046 out.go:177]   - Using image docker.io/registry:2.8.3
	I1202 11:31:24.553135   14046 out.go:177]   - Using image docker.io/busybox:stable
	I1202 11:31:24.553248   14046 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 11:31:24.553258   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 11:31:24.553274   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.551866   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 11:31:24.554056   14046 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I1202 11:31:24.554077   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:24.554099   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:24.554115   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:24.554335   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:24.554353   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	W1202 11:31:24.554435   14046 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 11:31:24.554726   14046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:24.554748   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 11:31:24.554766   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.555827   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I1202 11:31:24.556405   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 11:31:24.556577   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.557402   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.557420   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.558057   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.558321   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 11:31:24.559321   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.559334   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.559360   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.559379   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.559664   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.559799   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.559949   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.559958   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.560013   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.560428   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.560586   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 11:31:24.561196   14046 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1202 11:31:24.561215   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 11:31:24.562449   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 11:31:24.562471   14046 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 11:31:24.562476   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 11:31:24.562492   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.562522   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 11:31:24.562532   14046 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 11:31:24.562547   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.562576   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.563149   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.564463   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.564709   14046 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 11:31:24.565746   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1202 11:31:24.565810   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 11:31:24.565819   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 11:31:24.565837   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.566364   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.566848   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.566871   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.566945   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.567112   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.567307   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.567663   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.567855   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:24.567997   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.569106   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.569689   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.569721   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.569864   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.570008   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:24.570020   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.570160   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.570283   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.570565   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.570706   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.570730   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.571250   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.571276   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.571330   14046 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:24.571343   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 11:31:24.571359   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.571508   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.571529   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.571688   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.571705   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.572025   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572133   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572305   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572362   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.572452   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.572512   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.572564   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.572615   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.572667   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.572746   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.572801   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.572834   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.573137   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.573154   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.573204   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.573482   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.573956   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I1202 11:31:24.574649   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.575018   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.575407   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.575423   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.575488   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.575503   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.575585   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.575718   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.575817   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.575932   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.576165   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.576573   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.578063   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I1202 11:31:24.578225   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.578617   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.579075   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.579092   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.579154   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I1202 11:31:24.579518   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.579676   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.579703   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.579828   14046 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1202 11:31:24.580453   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.580474   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.580807   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.580980   14046 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:24.580998   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 11:31:24.581005   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.581024   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.583031   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.583283   14046 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:24.583297   14046 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:31:24.583313   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.583668   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I1202 11:31:24.583768   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I1202 11:31:24.584191   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.584544   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.584976   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.584990   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.585105   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.585117   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.585563   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.586229   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.586400   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.586629   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.586648   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.586682   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.586726   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.587020   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.587065   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.587113   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.587132   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.587150   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.587302   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.587360   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.587553   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.587779   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.587937   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.588096   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	W1202 11:31:24.588952   14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36208->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.588979   14046 retry.go:31] will retry after 162.447336ms: ssh: handshake failed: read tcp 192.168.39.1:36208->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.589096   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.589515   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.591066   14046 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 11:31:24.591067   14046 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 11:31:24.592118   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 11:31:24.592128   14046 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 11:31:24.592143   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.592191   14046 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:24.592198   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 11:31:24.592207   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.595378   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.595644   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I1202 11:31:24.595803   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.595822   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.595882   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.596063   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.596170   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.596184   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:24.596388   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.596513   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:24.596802   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:24.596819   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:24.596932   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.596948   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	W1202 11:31:24.597052   14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36224->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.597073   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.597073   14046 retry.go:31] will retry after 286.075051ms: ssh: handshake failed: read tcp 192.168.39.1:36224->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.597103   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:24.597222   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.597240   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:24.597394   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.597488   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	W1202 11:31:24.598040   14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36232->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.598119   14046 retry.go:31] will retry after 354.610148ms: ssh: handshake failed: read tcp 192.168.39.1:36232->192.168.39.203:22: read: connection reset by peer
	I1202 11:31:24.598499   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:24.599979   14046 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1202 11:31:24.601395   14046 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:24.601408   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1202 11:31:24.601419   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:24.603772   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.604008   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:24.604034   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:24.604262   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:24.604434   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:24.604557   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:24.604666   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
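Note: the "connection reset by peer" warnings above come from several addon installers dialing SSH to the node concurrently; sshutil simply retries each dial after a short delay, as the "will retry after Nms" lines show. A minimal bash sketch of that retry-with-delay pattern (the attempt count, delays, and the ssh invocation are illustrative, not minikube's actual implementation; host, key path and user are taken from the log):

    #!/usr/bin/env bash
    # Retry a flaky command a few times with a short, growing delay.
    retry() {
      local attempts=$1; shift
      local delay_ms=150
      for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        echo "attempt $i failed; will retry after ${delay_ms}ms" >&2
        sleep "$(awk "BEGIN {print ${delay_ms}/1000}")"
        delay_ms=$((delay_ms * 2))
      done
      return 1
    }

    # Example: the SSH dial that failed transiently above.
    retry 5 ssh -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa \
      -o StrictHostKeyChecking=no docker@192.168.39.203 true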
	I1202 11:31:24.838994   14046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:24.839176   14046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:31:24.858733   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:24.883859   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 11:31:24.883887   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 11:31:24.906094   14046 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:24.906113   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1202 11:31:24.933933   14046 node_ready.go:35] waiting up to 6m0s for node "addons-093588" to be "Ready" ...
	I1202 11:31:24.937202   14046 node_ready.go:49] node "addons-093588" has status "Ready":"True"
	I1202 11:31:24.937231   14046 node_ready.go:38] duration metric: took 3.246311ms for node "addons-093588" to be "Ready" ...
	I1202 11:31:24.937242   14046 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:31:24.944817   14046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace to be "Ready" ...
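Note: the readiness gates above (node "Ready", then each system-critical pod by label) are roughly equivalent to the following kubectl waits. The context name, label selectors and the 6m timeout are taken from the log; the commands are a hedged sketch of the check, not minikube's own code path:

    kubectl --context addons-093588 wait --for=condition=Ready node/addons-093588 --timeout=6m
    kubectl --context addons-093588 -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=kube-dns --timeout=6m
    kubectl --context addons-093588 -n kube-system wait --for=condition=Ready pod \
      -l component=kube-apiserver --timeout=6m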
	I1202 11:31:24.959764   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:24.974400   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:25.028401   14046 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 11:31:25.028429   14046 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 11:31:25.064822   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:25.066238   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:25.067275   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:25.094521   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:25.096473   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 11:31:25.096494   14046 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 11:31:25.120768   14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 11:31:25.120785   14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 11:31:25.127040   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 11:31:25.127059   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 11:31:25.143070   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:25.211041   14046 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:25.211067   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 11:31:25.224348   14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:25.224377   14046 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 11:31:25.306922   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 11:31:25.306951   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 11:31:25.312328   14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 11:31:25.312353   14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 11:31:25.354346   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:25.367695   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:25.440430   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:25.489263   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 11:31:25.489288   14046 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 11:31:25.494108   14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 11:31:25.494123   14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 11:31:25.505892   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 11:31:25.505913   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 11:31:25.736316   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 11:31:25.736339   14046 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 11:31:25.747519   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 11:31:25.747550   14046 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 11:31:25.785153   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 11:31:25.785175   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 11:31:26.043257   14046 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:26.043281   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 11:31:26.066545   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 11:31:26.066566   14046 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 11:31:26.144474   14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 11:31:26.144499   14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 11:31:26.257832   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:26.275811   14046 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:26.275838   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 11:31:26.434738   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 11:31:26.434762   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 11:31:26.548682   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:26.657205   14046 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.81798875s)
	I1202 11:31:26.657245   14046 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
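Note: the 1.8s "Completed" line above is the sed pipeline rewriting CoreDNS's Corefile so in-cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.39.1). A quick way to confirm the injected record, using the context and names from the log:

    kubectl --context addons-093588 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # Expected fragment, per the sed expression above:
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }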
	I1202 11:31:26.856310   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 11:31:26.856338   14046 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 11:31:26.953933   14046 pod_ready.go:103] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:27.167644   14046 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093588" context rescaled to 1 replicas
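Note: the "rescaled to 1 replicas" line is minikube trimming the two CoreDNS replicas seen above (coredns-7c65d6cfc9-5lcqk and -sh425) down to one on this single-node cluster. The equivalent manual operation would be:

    kubectl --context addons-093588 -n kube-system scale deployment coredns --replicas=1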
	I1202 11:31:27.259915   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 11:31:27.259936   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 11:31:27.322490   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.463722869s)
	I1202 11:31:27.322546   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:27.322564   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:27.322869   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:27.322892   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:27.322905   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:27.322920   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:27.322928   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:27.323166   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:27.323183   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:27.562371   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 11:31:27.562393   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 11:31:27.852297   14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:27.852375   14046 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 11:31:28.172709   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:28.882091   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.922291512s)
	I1202 11:31:28.882152   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:28.882167   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:28.882444   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:28.882462   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:28.882477   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:28.882489   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:28.882787   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:28.882836   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.144693   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.170258844s)
	I1202 11:31:29.144737   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.079888384s)
	I1202 11:31:29.144756   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144770   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.144799   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.078529681s)
	I1202 11:31:29.144757   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144846   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.144852   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.07756002s)
	I1202 11:31:29.144846   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144869   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.144875   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.144879   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145322   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145331   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145341   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145346   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145353   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145349   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145356   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145375   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145388   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145396   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145403   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145410   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145364   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145442   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145449   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145457   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.145463   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.145509   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145538   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145569   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145631   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145731   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145771   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145586   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145602   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.145796   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.145408   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.145753   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.147160   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.147164   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.147221   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.196163   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.196191   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.196457   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.196478   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	W1202 11:31:29.196562   14046 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
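Note: the warning above is a benign optimistic-concurrency conflict: the storage-provisioner-rancher callback tries to mark local-path as the default StorageClass while another installer is editing StorageClass objects, and the update loses the race ("the object has been modified"). If the default flag actually needed fixing, the standard annotation patch would do it; the commands below are a generic Kubernetes operation, not something this test runs, and "standard" is assumed to be the name of minikube's built-in default class:

    kubectl --context addons-093588 patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl --context addons-093588 patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'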
	I1202 11:31:29.226575   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.226598   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.226874   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:29.226906   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.226912   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.494215   14046 pod_ready.go:103] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:29.630127   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.535563804s)
	I1202 11:31:29.630190   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.630203   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.630516   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.630567   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.630585   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:29.630598   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:29.630831   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:29.630845   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:29.630876   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:31.078617   14046 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:31.078643   14046 pod_ready.go:82] duration metric: took 6.133804282s for pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:31.078656   14046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:31.604453   14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 11:31:31.604494   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:31.607456   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:31.607858   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:31.607881   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:31.608126   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:31.608355   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:31.608517   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:31.608723   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:32.118212   14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 11:31:32.349435   14046 addons.go:234] Setting addon gcp-auth=true in "addons-093588"
	I1202 11:31:32.349504   14046 host.go:66] Checking if "addons-093588" exists ...
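Note: the scp of google_application_credentials.json and google_cloud_project followed by "Setting addon gcp-auth=true" indicates that application-default credentials were apparently found on the CI host, so minikube copies them into the VM and enables the gcp-auth addon, which injects them into pods. The user-facing equivalent would be roughly the following (the gcp-auth namespace is assumed, not shown in this log):

    minikube -p addons-093588 addons enable gcp-auth
    kubectl --context addons-093588 -n gcp-auth get pods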
	I1202 11:31:32.349844   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:32.349891   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:32.364178   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I1202 11:31:32.364679   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:32.365165   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:32.365189   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:32.365508   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:32.366128   14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:31:32.366180   14046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:31:32.380001   14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
	I1202 11:31:32.380476   14046 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:31:32.380998   14046 main.go:141] libmachine: Using API Version  1
	I1202 11:31:32.381020   14046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:31:32.381308   14046 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:31:32.381494   14046 main.go:141] libmachine: (addons-093588) Calling .GetState
	I1202 11:31:32.382817   14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
	I1202 11:31:32.382990   14046 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 11:31:32.383016   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
	I1202 11:31:32.385576   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:32.385914   14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
	I1202 11:31:32.385938   14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
	I1202 11:31:32.386042   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
	I1202 11:31:32.386234   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
	I1202 11:31:32.386377   14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
	I1202 11:31:32.386522   14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
	I1202 11:31:32.584928   14046 pod_ready.go:93] pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.584947   14046 pod_ready.go:82] duration metric: took 1.506285543s for pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.584957   14046 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.592458   14046 pod_ready.go:93] pod "etcd-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.592477   14046 pod_ready.go:82] duration metric: took 7.514441ms for pod "etcd-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.592489   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.601795   14046 pod_ready.go:93] pod "kube-apiserver-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.601819   14046 pod_ready.go:82] duration metric: took 9.321566ms for pod "kube-apiserver-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.601831   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.611257   14046 pod_ready.go:93] pod "kube-controller-manager-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.611277   14046 pod_ready.go:82] duration metric: took 9.438391ms for pod "kube-controller-manager-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.611290   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bqbx" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.628996   14046 pod_ready.go:93] pod "kube-proxy-8bqbx" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:32.629014   14046 pod_ready.go:82] duration metric: took 17.716285ms for pod "kube-proxy-8bqbx" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:32.629025   14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:33.216468   14046 pod_ready.go:93] pod "kube-scheduler-addons-093588" in "kube-system" namespace has status "Ready":"True"
	I1202 11:31:33.216492   14046 pod_ready.go:82] duration metric: took 587.459361ms for pod "kube-scheduler-addons-093588" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:33.216500   14046 pod_ready.go:39] duration metric: took 8.279244651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:31:33.216514   14046 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:31:33.216560   14046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:31:33.790935   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.647826005s)
	I1202 11:31:33.791001   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791005   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.436618964s)
	I1202 11:31:33.791013   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791043   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791055   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791071   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.423342397s)
	I1202 11:31:33.791100   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791119   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791168   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.35070535s)
	I1202 11:31:33.791202   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791213   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791317   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.533440621s)
	W1202 11:31:33.791345   14046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 11:31:33.791371   14046 retry.go:31] will retry after 369.700432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
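Note: the "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" failure is the usual race when one kubectl apply creates CRDs and custom resources together: the VolumeSnapshotClass object is submitted before the freshly created CRD has been established by the API server, and minikube simply retries ~370ms later (which succeeds further down). Waiting for the CRD explicitly avoids the retry; a hedged sketch using the resource names and manifest path from the log (the manifest lives inside the VM, so the apply would have to run there, e.g. via minikube ssh):

    kubectl --context addons-093588 wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml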
	I1202 11:31:33.791378   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791400   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791427   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.242721006s)
	I1202 11:31:33.791438   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791437   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791445   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791446   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791450   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791468   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791476   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791488   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791454   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791521   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791456   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791549   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791493   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791527   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791567   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791575   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791531   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791584   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791586   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.791593   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.791875   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791887   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791898   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791905   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791907   14046 addons.go:475] Verifying addon metrics-server=true in "addons-093588"
	I1202 11:31:33.791912   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.791950   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791969   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.791987   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.791994   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.792000   14046 addons.go:475] Verifying addon ingress=true in "addons-093588"
	I1202 11:31:33.792201   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.792244   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.792252   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.792260   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:33.792268   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:33.793186   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:33.793210   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.793216   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.793370   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:33.793378   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:33.793386   14046 addons.go:475] Verifying addon registry=true in "addons-093588"
	I1202 11:31:33.794805   14046 out.go:177] * Verifying ingress addon...
	I1202 11:31:33.794865   14046 out.go:177] * Verifying registry addon...
	I1202 11:31:33.794866   14046 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-093588 service yakd-dashboard -n yakd-dashboard
	
	I1202 11:31:33.797060   14046 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 11:31:33.797129   14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 11:31:33.824422   14046 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 11:31:33.824444   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:33.824727   14046 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 11:31:33.824743   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
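	(The kapi waiter above polls the labelled pods until they leave Pending. An equivalent manual check, using the label selectors and namespaces from the log; the 90s timeout is an assumption, not the value kapi.go uses:
	  kubectl -n ingress-nginx wait --for=condition=Ready pod \
	    -l app.kubernetes.io/name=ingress-nginx --timeout=90s
	  kubectl -n kube-system wait --for=condition=Ready pod \
	    -l kubernetes.io/minikube-addons=registry --timeout=90s
	)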
	I1202 11:31:34.161790   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:34.343123   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.350325   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.694164   14046 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.311156085s)
	I1202 11:31:34.694245   14046 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.477671289s)
	I1202 11:31:34.694273   14046 api_server.go:72] duration metric: took 10.278626446s to wait for apiserver process to appear ...
	I1202 11:31:34.694284   14046 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:31:34.694305   14046 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
	I1202 11:31:34.694162   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.521398959s)
	I1202 11:31:34.694468   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:34.694493   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:34.694736   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:34.694753   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:34.694762   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:34.694769   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:34.694740   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:34.694983   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:34.694992   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:34.695001   14046 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093588"
	I1202 11:31:34.695677   14046 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:34.696349   14046 out.go:177] * Verifying csi-hostpath-driver addon...
	I1202 11:31:34.697823   14046 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 11:31:34.698942   14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 11:31:34.698953   14046 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 11:31:34.699027   14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 11:31:34.717379   14046 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
	ok
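	(The healthz probe above can be reproduced by hand against the same endpoint taken from the log; -k skips TLS verification for brevity, and on a default RBAC setup /healthz is readable anonymously, so a healthy apiserver should answer with the body "ok":
	  curl -k https://192.168.39.203:8443/healthz
	)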
	I1202 11:31:34.733562   14046 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 11:31:34.733577   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:34.734181   14046 api_server.go:141] control plane version: v1.31.2
	I1202 11:31:34.734195   14046 api_server.go:131] duration metric: took 39.902425ms to wait for apiserver health ...
	I1202 11:31:34.734202   14046 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:31:34.772298   14046 system_pods.go:59] 19 kube-system pods found
	I1202 11:31:34.772342   14046 system_pods.go:61] "amd-gpu-device-plugin-9x4xz" [55df6bd8-36c5-4864-8918-ac9425f2f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 11:31:34.772353   14046 system_pods.go:61] "coredns-7c65d6cfc9-5lcqk" [4d7cf83e-5dd7-42fb-982f-a45f12d7a40b] Running
	I1202 11:31:34.772365   14046 system_pods.go:61] "coredns-7c65d6cfc9-sh425" [749fc6c5-7fb8-4660-876f-15b8c46c2e50] Running
	I1202 11:31:34.772376   14046 system_pods.go:61] "csi-hostpath-attacher-0" [9090d43f-db00-4d9f-a761-7e784e7d66e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 11:31:34.772391   14046 system_pods.go:61] "csi-hostpath-resizer-0" [eacac2d8-005d-4f85-aa5f-5ee6725473a4] Pending
	I1202 11:31:34.772405   14046 system_pods.go:61] "csi-hostpathplugin-jtbvg" [5558e993-a5eb-47db-b72e-028a2df87321] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 11:31:34.772416   14046 system_pods.go:61] "etcd-addons-093588" [133711db-b531-4f45-b56d-d479fc0d3bf2] Running
	I1202 11:31:34.772427   14046 system_pods.go:61] "kube-apiserver-addons-093588" [4fa270b4-87bc-41ea-9c7e-d194a6a7a8dd] Running
	I1202 11:31:34.772438   14046 system_pods.go:61] "kube-controller-manager-addons-093588" [b742eb2a-db16-4d33-8520-0bbb9c083127] Running
	I1202 11:31:34.772452   14046 system_pods.go:61] "kube-ingress-dns-minikube" [93d2e4da-4868-4b1e-9718-bcc404d49f31] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 11:31:34.772462   14046 system_pods.go:61] "kube-proxy-8bqbx" [f637fa3b-3c50-489d-b864-5477922486f8] Running
	I1202 11:31:34.772473   14046 system_pods.go:61] "kube-scheduler-addons-093588" [115de73f-014e-43eb-bf1c-4294dc736871] Running
	I1202 11:31:34.772486   14046 system_pods.go:61] "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 11:31:34.772500   14046 system_pods.go:61] "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 11:31:34.772515   14046 system_pods.go:61] "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 11:31:34.772529   14046 system_pods.go:61] "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 11:31:34.772544   14046 system_pods.go:61] "snapshot-controller-56fcc65765-5684m" [1b9feacd-f2e4-41f7-abc9-06e472d66f0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.772558   14046 system_pods.go:61] "snapshot-controller-56fcc65765-dj6kc" [ea0e750d-7300-4238-9443-627b04eb650d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.772570   14046 system_pods.go:61] "storage-provisioner" [90465e3b-c05f-4fff-a0f6-c6a8b7703e89] Running
	I1202 11:31:34.772583   14046 system_pods.go:74] duration metric: took 38.374545ms to wait for pod list to return data ...
	I1202 11:31:34.772598   14046 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:31:34.779139   14046 default_sa.go:45] found service account: "default"
	I1202 11:31:34.779155   14046 default_sa.go:55] duration metric: took 6.550708ms for default service account to be created ...
	I1202 11:31:34.779163   14046 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:31:34.807767   14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 11:31:34.807791   14046 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 11:31:34.811811   14046 system_pods.go:86] 19 kube-system pods found
	I1202 11:31:34.811834   14046 system_pods.go:89] "amd-gpu-device-plugin-9x4xz" [55df6bd8-36c5-4864-8918-ac9425f2f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 11:31:34.811839   14046 system_pods.go:89] "coredns-7c65d6cfc9-5lcqk" [4d7cf83e-5dd7-42fb-982f-a45f12d7a40b] Running
	I1202 11:31:34.811846   14046 system_pods.go:89] "coredns-7c65d6cfc9-sh425" [749fc6c5-7fb8-4660-876f-15b8c46c2e50] Running
	I1202 11:31:34.811851   14046 system_pods.go:89] "csi-hostpath-attacher-0" [9090d43f-db00-4d9f-a761-7e784e7d66e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1202 11:31:34.811862   14046 system_pods.go:89] "csi-hostpath-resizer-0" [eacac2d8-005d-4f85-aa5f-5ee6725473a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1202 11:31:34.811871   14046 system_pods.go:89] "csi-hostpathplugin-jtbvg" [5558e993-a5eb-47db-b72e-028a2df87321] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1202 11:31:34.811874   14046 system_pods.go:89] "etcd-addons-093588" [133711db-b531-4f45-b56d-d479fc0d3bf2] Running
	I1202 11:31:34.811878   14046 system_pods.go:89] "kube-apiserver-addons-093588" [4fa270b4-87bc-41ea-9c7e-d194a6a7a8dd] Running
	I1202 11:31:34.811882   14046 system_pods.go:89] "kube-controller-manager-addons-093588" [b742eb2a-db16-4d33-8520-0bbb9c083127] Running
	I1202 11:31:34.811890   14046 system_pods.go:89] "kube-ingress-dns-minikube" [93d2e4da-4868-4b1e-9718-bcc404d49f31] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1202 11:31:34.811893   14046 system_pods.go:89] "kube-proxy-8bqbx" [f637fa3b-3c50-489d-b864-5477922486f8] Running
	I1202 11:31:34.811900   14046 system_pods.go:89] "kube-scheduler-addons-093588" [115de73f-014e-43eb-bf1c-4294dc736871] Running
	I1202 11:31:34.811907   14046 system_pods.go:89] "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 11:31:34.811912   14046 system_pods.go:89] "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 11:31:34.811920   14046 system_pods.go:89] "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1202 11:31:34.811925   14046 system_pods.go:89] "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1202 11:31:34.811930   14046 system_pods.go:89] "snapshot-controller-56fcc65765-5684m" [1b9feacd-f2e4-41f7-abc9-06e472d66f0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.811935   14046 system_pods.go:89] "snapshot-controller-56fcc65765-dj6kc" [ea0e750d-7300-4238-9443-627b04eb650d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1202 11:31:34.811941   14046 system_pods.go:89] "storage-provisioner" [90465e3b-c05f-4fff-a0f6-c6a8b7703e89] Running
	I1202 11:31:34.811947   14046 system_pods.go:126] duration metric: took 32.779668ms to wait for k8s-apps to be running ...
	I1202 11:31:34.811953   14046 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:31:34.811993   14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:31:34.814772   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.814898   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.865148   14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:34.865170   14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 11:31:34.910684   14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:35.212476   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.302270   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.306145   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.704047   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.804040   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.804460   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.906004   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.744152206s)
	I1202 11:31:35.906055   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:35.906063   14046 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.094047231s)
	I1202 11:31:35.906092   14046 system_svc.go:56] duration metric: took 1.094134923s WaitForService to wait for kubelet
	I1202 11:31:35.906107   14046 kubeadm.go:582] duration metric: took 11.490458054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:31:35.906141   14046 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:31:35.906072   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:35.906478   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:35.906510   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:35.906522   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:35.906529   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:35.906722   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:35.906735   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:35.909515   14046 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:31:35.909536   14046 node_conditions.go:123] node cpu capacity is 2
	I1202 11:31:35.909545   14046 node_conditions.go:105] duration metric: took 3.397157ms to run NodePressure ...
	I1202 11:31:35.909555   14046 start.go:241] waiting for startup goroutines ...
	I1202 11:31:36.207546   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.311696   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.323552   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:36.524594   14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.613849544s)
	I1202 11:31:36.524666   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:36.524682   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:36.525003   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:36.525022   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:36.525036   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:36.525064   14046 main.go:141] libmachine: Making call to close driver server
	I1202 11:31:36.525075   14046 main.go:141] libmachine: (addons-093588) Calling .Close
	I1202 11:31:36.525318   14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
	I1202 11:31:36.525334   14046 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:31:36.525348   14046 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:31:36.526230   14046 addons.go:475] Verifying addon gcp-auth=true in "addons-093588"
	I1202 11:31:36.528737   14046 out.go:177] * Verifying gcp-auth addon...
	I1202 11:31:36.530986   14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 11:31:36.578001   14046 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 11:31:36.578020   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:36.704649   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.809208   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.809895   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.037424   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.203141   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.301723   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.302535   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.535104   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.703267   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.802335   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.802610   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.036909   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.204479   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.301632   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:38.302255   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.534810   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.704658   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.802708   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.803554   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.037174   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.292617   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.392307   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.392645   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:39.535333   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.704929   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.802557   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.803397   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.035299   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.205429   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.301785   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.301851   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.535337   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.703275   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.800655   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.801812   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.034994   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.204157   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.302831   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:41.303262   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.535151   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.703985   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.801319   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.801446   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.034352   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.203443   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.302890   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.304166   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:42.535013   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.703286   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.800816   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.801395   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:43.035672   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.203886   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.300980   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.301410   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:43.535388   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.704078   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.801008   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.801871   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.035750   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.241245   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.303030   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.303402   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.535189   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.704145   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.802535   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.803477   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.035547   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.205121   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.302246   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.306235   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.534465   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.703630   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.801940   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.802281   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.035662   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.203259   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.302067   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.302106   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.534762   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.703700   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.800864   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.802040   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.036727   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.204080   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.301844   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.301978   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.534983   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.704106   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.801707   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.803397   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.035137   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.203099   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.301547   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.301783   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.533891   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.703958   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.800958   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.801440   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.034561   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.204427   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.300634   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.301040   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.796093   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.796650   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.894974   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.895409   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.035131   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.205221   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.303043   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.303481   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.534978   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.704273   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.801772   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.801913   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.036221   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.202958   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.301672   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.303883   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.535974   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.705307   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.801763   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.802054   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.034979   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.204086   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.304301   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.305641   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.535427   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.704315   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.802423   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.802894   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.034594   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.204339   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.303653   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.306254   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.535883   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.704290   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.801531   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.802072   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.117303   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.203910   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.302087   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.302794   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.535306   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.703953   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.801915   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.801935   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:55.035228   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.203582   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.301814   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.302766   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:55.534254   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.703526   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.801462   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.801784   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.034736   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.204957   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.302824   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.303171   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.535416   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.704209   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.800476   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.802007   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.034734   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.204149   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.301587   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:57.302347   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.534833   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.704817   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.802147   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.802493   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.034493   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.203588   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.301828   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.302488   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:58.534315   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.705874   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.801208   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.802117   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.035206   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.204016   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.300680   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.301228   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.534267   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.703462   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.802411   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.805743   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.034944   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.205868   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.302403   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.302619   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.535930   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.705347   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.802373   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.802691   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.034165   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.203083   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.302108   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.302231   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.534962   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.704177   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.800790   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.801125   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.035522   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.207255   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.305529   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:02.305891   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.535277   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.703940   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.801885   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.801903   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.035451   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.203573   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.302065   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:03.302261   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.535720   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.703935   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.800844   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.801307   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.035517   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.209494   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.301432   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.302504   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.534911   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.703576   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.803619   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.804099   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.037027   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.204348   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.304406   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.305143   14046 kapi.go:107] duration metric: took 31.508010049s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 11:32:05.539056   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.704700   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.804304   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.039817   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.205353   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.310095   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.534977   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.704090   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.800726   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.035759   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.204852   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.301177   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.534942   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.703430   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.801253   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.035545   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.203485   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.304272   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.535354   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.703653   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.801345   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.035283   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.203667   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.301315   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.534575   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.708677   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.801812   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.034861   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.204571   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.685014   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.785858   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.786536   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.800928   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.034660   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.203914   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.303391   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.535680   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.704751   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.805498   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.043914   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:12.203937   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:12.301289   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.536468   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.048324   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.048675   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.048713   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.206976   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.306351   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.535323   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.704264   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.804182   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.035842   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.208917   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.301365   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.535026   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.703588   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.801725   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.034610   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.204327   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.304934   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.534739   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.704785   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.801778   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.034504   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.204196   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.630650   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.632171   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.703056   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.801188   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.034638   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.204193   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.305590   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.537824   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.703501   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.801783   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.274930   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.277014   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.324560   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.536509   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.704072   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.801749   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.036866   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.203700   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.305338   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.534946   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.703543   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.801503   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.033851   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.204394   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.301489   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.534043   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.704035   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.802048   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.035351   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.204075   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.304623   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.534698   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.703740   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.800941   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.035176   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.204538   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.303225   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.535611   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.703682   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.802117   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.379807   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.382707   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.383795   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.537984   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.707670   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.801120   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.035076   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.205347   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.301126   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.535567   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.703844   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.801658   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.035126   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.205250   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.302531   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.535923   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.703680   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.801499   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.034235   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.204524   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.301216   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.534899   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.703670   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.801160   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.034705   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.209222   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.311879   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.551203   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.706021   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.804614   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.035342   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.203667   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.301793   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.544354   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.711784   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.810267   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.034649   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:29.204547   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.301152   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.534413   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:29.704108   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.802865   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:30.035779   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:30.204665   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.304717   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:30.534685   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:30.703851   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.802376   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.037512   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:31.544834   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.545362   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.557069   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:31.706516   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.807268   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.034741   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:32.204171   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.301464   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.534454   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:32.704155   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.801829   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.034795   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:33.203510   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.306267   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.536390   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:33.708085   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.802088   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.034963   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:34.204776   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.308108   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.536044   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:34.703641   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.804438   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.035343   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:35.203465   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:35.303592   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.535810   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:35.709085   14046 kapi.go:107] duration metric: took 1m1.010057933s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 11:32:35.802151   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:36.035498   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:36.301273   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:36.534659   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:36.801419   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:37.035446   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:37.301607   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:37.534705   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:37.803178   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:38.035229   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:38.301283   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:38.535357   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:38.801506   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:39.035756   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:39.303845   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:39.536507   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:39.803141   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:40.035121   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:40.308205   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:40.535897   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:40.803283   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:41.035083   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:41.302929   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:41.534524   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:41.801381   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:42.035509   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:42.301517   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:42.534206   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:42.801696   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:43.363908   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:43.367795   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:43.534145   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:43.801034   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:44.035075   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:44.301680   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:44.535413   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:44.802593   14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:45.036399   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:45.303689   14046 kapi.go:107] duration metric: took 1m11.506622692s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 11:32:45.534723   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:46.035278   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:46.534932   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:47.034975   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:47.535739   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:48.034856   14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:48.535985   14046 kapi.go:107] duration metric: took 1m12.004997488s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 11:32:48.537647   14046 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093588 cluster.
	I1202 11:32:48.538975   14046 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 11:32:48.540091   14046 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 11:32:48.541177   14046 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, default-storageclass, inspektor-gadget, metrics-server, amd-gpu-device-plugin, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1202 11:32:48.542184   14046 addons.go:510] duration metric: took 1m24.126505676s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin default-storageclass inspektor-gadget metrics-server amd-gpu-device-plugin yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1202 11:32:48.542232   14046 start.go:246] waiting for cluster config update ...
	I1202 11:32:48.542256   14046 start.go:255] writing updated cluster config ...
	I1202 11:32:48.542565   14046 ssh_runner.go:195] Run: rm -f paused
	I1202 11:32:48.592664   14046 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:32:48.594409   14046 out.go:177] * Done! kubectl is now configured to use "addons-093588" cluster and "default" namespace by default
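
	The gcp-auth messages above describe behavior driven by the addon's mutating webhook: credentials are mounted into every newly created pod unless the pod carries the gcp-auth-skip-secret label. A minimal, hypothetical opt-out example follows (the label value "true" is an assumption; the webhook keys off the label itself), together with the --refresh re-run that the output suggests for remounting credentials into pods that already exist:

	  kubectl --context addons-093588 run skip-demo --image=busybox \
	    --labels="gcp-auth-skip-secret=true" -- sleep 3600

	  minikube -p addons-093588 addons enable gcp-auth --refresh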
	
	
	==> CRI-O <==
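	The entries below are CRI-O debug responses to CRI RuntimeService/ImageService calls (Version, ListPodSandbox, ListContainers, ImageFsInfo). The same runtime state can be inspected by hand with crictl; a minimal sketch, assuming crictl is available inside the addons-093588 VM and CRI-O is listening on its default socket:

	  minikube -p addons-093588 ssh
	  sudo crictl version        # RuntimeService/Version
	  sudo crictl pods           # RuntimeService/ListPodSandbox
	  sudo crictl ps -a          # RuntimeService/ListContainers
	  sudo crictl imagefsinfo    # ImageService/ImageFsInfo
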
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.239036055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09ff520b-ec1b-4c0a-8f02-3ebd93185fb7 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.241169889Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2424f4c2-c222-4076-95b3-f3810dc7835f name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.241436988Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1733139236134573190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:33:55.813784593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9f6e4744-0d79-497c-83f9-2119471a0df3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139169488004241,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-83f9-2119471a0df3,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:32:49.179630307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f8cd12020a861322b
02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-z5r8x,Uid:b4ffaa02-f311-4afa-9113-ac7a8b7b5828,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139090850154614,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:30.233577345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-6bbl8,Uid:c2094412-6704-4c4f-8bc7-c21561ad7372,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139089909492415,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,pod-template-hash: 86d989889c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:29.114140663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:90465e3b-c05f-4fff-a0f6-c6a8b7703e89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139089310304757,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":
{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T11:31:28.880933004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-9x4xz,Uid:55df6bd8-36c5-4864-8918-ac9425f2f9cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139087295738499,Labels:map[string]string{controller-r
evision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:26.978148659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sh425,Uid:749fc6c5-7fb8-4660-876f-15b8c46c2e50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139085427682825,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 202
4-12-02T11:31:24.221453920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&PodSandboxMetadata{Name:kube-proxy-8bqbx,Uid:f637fa3b-3c50-489d-b864-5477922486f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139084010336031,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:23.103682635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-093588,Uid:fb05324ef0da57c6be9879c98c60ce72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:173313
9073318258743,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fb05324ef0da57c6be9879c98c60ce72,kubernetes.io/config.seen: 2024-12-02T11:31:12.647806322Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&PodSandboxMetadata{Name:etcd-addons-093588,Uid:c463271d0012074285091ad6a9bb5269,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073315298857,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,tier: control-plane,},Annotations:map[string]string{kubeadm.kube
rnetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: c463271d0012074285091ad6a9bb5269,kubernetes.io/config.seen: 2024-12-02T11:31:12.647808573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-093588,Uid:5a54bf73c0b779fcefc9f9ad61889351,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073311589878,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 5a54bf73c0b779fcefc9f9ad61889351,kubernetes.io/config.seen: 2024-12-02T11:31:12.647798299Z,kubernetes.io/config.source: fil
e,},RuntimeHandler:,},&PodSandbox{Id:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-093588,Uid:2bc34c7aba0bd63feec10df99ed16d0b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073309670964,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bc34c7aba0bd63feec10df99ed16d0b,kubernetes.io/config.seen: 2024-12-02T11:31:12.647807592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2424f4c2-c222-4076-95b3-f3810dc7835f name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.242046380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=671a0489-8b29-482f-a01a-0ee5d964b3ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.242896482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8a3baae-7a0e-4fbc-82c1-7cb428096e7c name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.242961451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8a3baae-7a0e-4fbc-82c1-7cb428096e7c name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.243207911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173313908
8826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:949
9c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8a3baae-7a0e-4fbc-82c1-7cb428096e7c name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.243246163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139537243224548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=671a0489-8b29-482f-a01a-0ee5d964b3ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.243876968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dc90a87-40b8-4c3f-ba09-ac813e780a22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.243924983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dc90a87-40b8-4c3f-ba09-ac813e780a22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.244163565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173313908
8826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:949
9c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4dc90a87-40b8-4c3f-ba09-ac813e780a22 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.282736484Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68cf692a-2a3f-4b92-af9f-e2dc32522b95 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.282820890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68cf692a-2a3f-4b92-af9f-e2dc32522b95 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.283654124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29f6a746-0807-4659-b8ea-c718de200767 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.284858055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139537284836194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29f6a746-0807-4659-b8ea-c718de200767 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.285410811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37ff9709-2e9a-447f-97c0-9582f7724753 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.285463251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37ff9709-2e9a-447f-97c0-9582f7724753 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.285806596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173313908
8826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:949
9c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37ff9709-2e9a-447f-97c0-9582f7724753 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.319992579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e649b2c-4b06-4f23-ab02-8556ebb3bafb name=/runtime.v1.RuntimeService/Version
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.320098826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e649b2c-4b06-4f23-ab02-8556ebb3bafb name=/runtime.v1.RuntimeService/Version
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.321209584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5745841f-a9de-4d6e-a492-22f5ec513041 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.322378761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139537322358093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5745841f-a9de-4d6e-a492-22f5ec513041 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.323024583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16459dd8-ecdb-4a0d-9954-e47478a36264 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.323206592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16459dd8-ecdb-4a0d-9954-e47478a36264 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:38:57 addons-093588 crio[664]: time="2024-12-02 11:38:57.323506623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173313908
8826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:949
9c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16459dd8-ecdb-4a0d-9954-e47478a36264 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84b181ee3e257       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   06d534d8ecc02       hello-world-app-55bf9c44b4-bq9jt
	27ac1b9f95162       docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303                         4 minutes ago       Running             nginx                     0                   a4e1abefd1098       nginx
	5c6e1825b9c51       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   c5e32f031e4c7       busybox
	3350f86d51ada       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   4f8cd12020a86       metrics-server-84c5f94fbc-z5r8x
	bd688bac9204b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   727a2ad10b461       local-path-provisioner-86d989889c-6bbl8
	53ee4c3f1d373       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                6 minutes ago       Running             amd-gpu-device-plugin     0                   20efed53273ca       amd-gpu-device-plugin-9x4xz
	777ee197b7d2c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   dadb7aad77d41       storage-provisioner
	2415b4c333fed       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   1140032f7ee0a       coredns-7c65d6cfc9-sh425
	28fe66023dde9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   db3aa60a35b6c       kube-proxy-8bqbx
	5256bb6e86f1e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   e4ff56ebcc0a5       etcd-addons-093588
	3e083dadde5b1       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   e2d72d2c0f73b       kube-apiserver-addons-093588
	6bf4cf0d44bb8       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   94204ef648dac       kube-controller-manager-addons-093588
	9c587d7cc1d10       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   7ecb4d3d09f04       kube-scheduler-addons-093588
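
	The listing above is the human-readable form of the ListContainers responses that appear repeatedly in the CRI-O debug log earlier in this dump. For reference, a minimal Go sketch that queries the same CRI endpoint directly might look like the following; it assumes the default CRI-O socket path unix:///var/run/crio/crio.sock and the k8s.io/cri-api client, neither of which is stated in this report, and it is only an illustration, not how the test harness collects this output.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Connect to the CRI-O socket (assumed default path; adjust if crio.conf differs).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC the kubelet issues in the log above; an empty filter returns every container.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Print a truncated ID, the container name, and its state, similar to the table above.
			fmt.Printf("%-15s %-30s %s\n", c.Id[:13], c.GetMetadata().GetName(), c.GetState().String())
		}
	}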
	
	
	==> coredns [2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6] <==
	[INFO] 10.244.0.22:43188 - 26367 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000109144s
	[INFO] 10.244.0.22:43188 - 3234 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083124s
	[INFO] 10.244.0.22:43188 - 34306 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000126171s
	[INFO] 10.244.0.22:43188 - 56418 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000138479s
	[INFO] 10.244.0.22:58709 - 12937 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075812s
	[INFO] 10.244.0.22:58709 - 37433 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000083296s
	[INFO] 10.244.0.22:58709 - 56353 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077531s
	[INFO] 10.244.0.22:58709 - 65129 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000155421s
	[INFO] 10.244.0.22:58709 - 37161 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000128037s
	[INFO] 10.244.0.22:58709 - 41319 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000093498s
	[INFO] 10.244.0.22:58709 - 40231 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074305s
	[INFO] 10.244.0.22:44804 - 25661 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000094824s
	[INFO] 10.244.0.22:34853 - 63275 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035776s
	[INFO] 10.244.0.22:34853 - 7041 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000102016s
	[INFO] 10.244.0.22:44804 - 43777 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033843s
	[INFO] 10.244.0.22:44804 - 47454 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040387s
	[INFO] 10.244.0.22:34853 - 22794 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028765s
	[INFO] 10.244.0.22:44804 - 60524 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008715s
	[INFO] 10.244.0.22:34853 - 50139 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042029s
	[INFO] 10.244.0.22:44804 - 4754 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106476s
	[INFO] 10.244.0.22:34853 - 34897 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000253856s
	[INFO] 10.244.0.22:34853 - 60622 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000181075s
	[INFO] 10.244.0.22:44804 - 18960 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000324825s
	[INFO] 10.244.0.22:44804 - 50155 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000238638s
	[INFO] 10.244.0.22:34853 - 26211 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000099507s
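
	The repeated NXDOMAIN answers above are the expected resolv.conf search-path expansion inside the querying pod: each search suffix is tried in turn, and only the fully qualified hello-world-app.default.svc.cluster.local returns NOERROR. A minimal Go sketch of that expansion follows; the suffix list is inferred from the queries above rather than read from the pod, and lookups will generally fail when run outside the cluster.

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Search suffixes implied by the CoreDNS queries above (typical kubelet-generated resolv.conf).
		search := []string{
			"ingress-nginx.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"", // finally try the name as given
		}
		name := "hello-world-app.default.svc.cluster.local"

		for _, s := range search {
			fqdn := name
			if s != "" {
				fqdn = name + "." + s
			}
			addrs, err := net.LookupHost(fqdn)
			if err != nil {
				// Mirrors the NXDOMAIN lines above; outside the cluster every candidate usually fails.
				fmt.Printf("%-80s lookup failed: %v\n", fqdn, err)
				continue
			}
			fmt.Printf("%-80s %v\n", fqdn, addrs)
		}
	}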
	
	
	==> describe nodes <==
	Name:               addons-093588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-093588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=addons-093588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_31_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-093588
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:31:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-093588
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:38:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:36:24 +0000   Mon, 02 Dec 2024 11:31:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:36:24 +0000   Mon, 02 Dec 2024 11:31:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:36:24 +0000   Mon, 02 Dec 2024 11:31:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:36:24 +0000   Mon, 02 Dec 2024 11:31:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.203
	  Hostname:    addons-093588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b981ec46e284c639f1b7adc8d382e1a
	  System UUID:                0b981ec4-6e28-4c63-9f1b-7adc8d382e1a
	  Boot ID:                    df4ffb50-8889-4ff6-ab14-5cfc93566331
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     hello-world-app-55bf9c44b4-bq9jt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 amd-gpu-device-plugin-9x4xz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 coredns-7c65d6cfc9-sh425                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m33s
	  kube-system                 etcd-addons-093588                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m38s
	  kube-system                 kube-apiserver-addons-093588               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-controller-manager-addons-093588      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-8bqbx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-scheduler-addons-093588               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 metrics-server-84c5f94fbc-z5r8x            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m27s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	  local-path-storage          local-path-provisioner-86d989889c-6bbl8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m33s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m45s (x8 over 7m45s)  kubelet          Node addons-093588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s (x8 over 7m45s)  kubelet          Node addons-093588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s (x7 over 7m45s)  kubelet          Node addons-093588 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m39s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s                  kubelet          Node addons-093588 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s                  kubelet          Node addons-093588 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s                  kubelet          Node addons-093588 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m38s                  kubelet          Node addons-093588 status is now: NodeReady
	  Normal  RegisteredNode           7m34s                  node-controller  Node addons-093588 event: Registered Node addons-093588 in Controller
	
	
	==> dmesg <==
	[  +0.076788] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.295485] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.458763] systemd-fstab-generator[1464]: Ignoring "noauto" option for root device
	[  +4.667325] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.157468] kauditd_printk_skb: 127 callbacks suppressed
	[  +6.980309] kauditd_printk_skb: 100 callbacks suppressed
	[Dec 2 11:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.999753] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.944183] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.256346] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.021267] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.144991] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.049857] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.986101] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.155001] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 2 11:33] kauditd_printk_skb: 1 callbacks suppressed
	[ +21.174772] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.028933] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.099758] kauditd_printk_skb: 62 callbacks suppressed
	[ +15.240942] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.362197] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 2 11:34] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.790044] kauditd_printk_skb: 49 callbacks suppressed
	[Dec 2 11:36] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.052814] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630] <==
	{"level":"info","ts":"2024-12-02T11:32:31.509031Z","caller":"traceutil/trace.go:171","msg":"trace[1734125650] transaction","detail":"{read_only:false; response_revision:1081; number_of_response:1; }","duration":"413.800042ms","start":"2024-12-02T11:32:31.095224Z","end":"2024-12-02T11:32:31.509024Z","steps":["trace[1734125650] 'process raft request'  (duration: 412.556295ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:31.509318Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:31.095216Z","time spent":"413.887629ms","remote":"127.0.0.1:54862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:837 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
	{"level":"info","ts":"2024-12-02T11:32:31.509910Z","caller":"traceutil/trace.go:171","msg":"trace[1769726754] linearizableReadLoop","detail":"{readStateIndex:1110; appliedIndex:1108; }","duration":"317.515372ms","start":"2024-12-02T11:32:31.190760Z","end":"2024-12-02T11:32:31.508275Z","steps":["trace[1769726754] 'read index received'  (duration: 316.176707ms)","trace[1769726754] 'applied index is now lower than readState.Index'  (duration: 1.338186ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-02T11:32:31.510039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.348778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:32:31.510230Z","caller":"traceutil/trace.go:171","msg":"trace[1905010634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"319.54177ms","start":"2024-12-02T11:32:31.190681Z","end":"2024-12-02T11:32:31.510223Z","steps":["trace[1905010634] 'agreement among raft nodes before linearized reading'  (duration: 319.33211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:31.510347Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:31.190637Z","time spent":"319.701654ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-02T11:32:31.510884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.331323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-67d98fc6b-hsrwm\" ","response":"range_response_count:1 size:4581"}
	{"level":"info","ts":"2024-12-02T11:32:31.510999Z","caller":"traceutil/trace.go:171","msg":"trace[1506151935] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-67d98fc6b-hsrwm; range_end:; response_count:1; response_revision:1081; }","duration":"290.452941ms","start":"2024-12-02T11:32:31.220538Z","end":"2024-12-02T11:32:31.510991Z","steps":["trace[1506151935] 'agreement among raft nodes before linearized reading'  (duration: 290.098187ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:31.511638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.271954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:32:31.511777Z","caller":"traceutil/trace.go:171","msg":"trace[41204636] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"223.410853ms","start":"2024-12-02T11:32:31.288359Z","end":"2024-12-02T11:32:31.511769Z","steps":["trace[41204636] 'agreement among raft nodes before linearized reading'  (duration: 223.263454ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:32:40.977312Z","caller":"traceutil/trace.go:171","msg":"trace[1810761727] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"124.053614ms","start":"2024-12-02T11:32:40.853243Z","end":"2024-12-02T11:32:40.977297Z","steps":["trace[1810761727] 'process raft request'  (duration: 123.593438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:43.349291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.790942ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888176061930468756 > lease_revoke:<id:35f59387238af47d>","response":"size:27"}
	{"level":"info","ts":"2024-12-02T11:32:43.349565Z","caller":"traceutil/trace.go:171","msg":"trace[978203883] linearizableReadLoop","detail":"{readStateIndex:1159; appliedIndex:1158; }","duration":"326.46397ms","start":"2024-12-02T11:32:43.023087Z","end":"2024-12-02T11:32:43.349551Z","steps":["trace[978203883] 'read index received'  (duration: 176.281029ms)","trace[978203883] 'applied index is now lower than readState.Index'  (duration: 150.108422ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-02T11:32:43.349677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.516213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:32:43.349811Z","caller":"traceutil/trace.go:171","msg":"trace[1556617837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1127; }","duration":"326.718412ms","start":"2024-12-02T11:32:43.023082Z","end":"2024-12-02T11:32:43.349800Z","steps":["trace[1556617837] 'agreement among raft nodes before linearized reading'  (duration: 326.493012ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:43.349891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:43.023040Z","time spent":"326.836999ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-12-02T11:33:25.193945Z","caller":"traceutil/trace.go:171","msg":"trace[1208210779] linearizableReadLoop","detail":"{readStateIndex:1400; appliedIndex:1399; }","duration":"208.352199ms","start":"2024-12-02T11:33:24.985579Z","end":"2024-12-02T11:33:25.193931Z","steps":["trace[1208210779] 'read index received'  (duration: 208.251949ms)","trace[1208210779] 'applied index is now lower than readState.Index'  (duration: 99.656µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-02T11:33:25.194034Z","caller":"traceutil/trace.go:171","msg":"trace[585938922] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"373.401906ms","start":"2024-12-02T11:33:24.820626Z","end":"2024-12-02T11:33:25.194028Z","steps":["trace[585938922] 'process raft request'  (duration: 373.198624ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:33:25.194112Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:33:24.820609Z","time spent":"373.443109ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4248,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" mod_revision:1354 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" value_size:4148 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" > >"}
	{"level":"warn","ts":"2024-12-02T11:33:25.194354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.766538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-66c9cd494c-4dmpv.180d58d19a377856\" ","response":"range_response_count:1 size:826"}
	{"level":"info","ts":"2024-12-02T11:33:25.194403Z","caller":"traceutil/trace.go:171","msg":"trace[1712045514] range","detail":"{range_begin:/registry/events/kube-system/registry-66c9cd494c-4dmpv.180d58d19a377856; range_end:; response_count:1; response_revision:1356; }","duration":"208.820614ms","start":"2024-12-02T11:33:24.985574Z","end":"2024-12-02T11:33:25.194395Z","steps":["trace[1712045514] 'agreement among raft nodes before linearized reading'  (duration: 208.618556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:33:25.194540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.491693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" ","response":"range_response_count:1 size:4263"}
	{"level":"info","ts":"2024-12-02T11:33:25.194787Z","caller":"traceutil/trace.go:171","msg":"trace[1843945180] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1; range_end:; response_count:1; response_revision:1356; }","duration":"187.741861ms","start":"2024-12-02T11:33:25.007033Z","end":"2024-12-02T11:33:25.194775Z","steps":["trace[1843945180] 'agreement among raft nodes before linearized reading'  (duration: 187.326707ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:33:46.232089Z","caller":"traceutil/trace.go:171","msg":"trace[564767909] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"323.809998ms","start":"2024-12-02T11:33:45.908266Z","end":"2024-12-02T11:33:46.232076Z","steps":["trace[564767909] 'process raft request'  (duration: 323.483126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:33:46.232222Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:33:45.908251Z","time spent":"323.913932ms","remote":"127.0.0.1:54798","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1524 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 11:38:57 up 8 min,  0 users,  load average: 0.25, 0.67, 0.49
	Linux addons-093588 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 11:33:18.047549       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
	E1202 11:33:18.054141       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
	E1202 11:33:18.078468       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
	I1202 11:33:18.164928       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1202 11:33:20.557130       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.110.149"}
	I1202 11:33:50.188346       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1202 11:33:51.225296       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1202 11:33:53.732475       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1202 11:33:55.653397       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 11:33:55.870562       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.159.13"}
	I1202 11:34:08.733165       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.733224       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.777393       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.778616       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.789293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.789434       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.791466       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:08.791539       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:08.934431       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W1202 11:34:09.792203       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1202 11:34:09.937879       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1202 11:34:09.937904       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1202 11:36:15.262680       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.26.244"}
	
	
	==> kube-controller-manager [6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96] <==
	E1202 11:36:48.122668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:36:58.446503       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:36:58.446561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:16.262827       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:16.262889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:18.588403       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:18.588435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:26.687386       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:26.687435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:31.227949       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:31.228071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:03.236908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:03.237051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:07.141282       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:07.141426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:08.797085       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:08.797251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:13.468512       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:13.468744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:38.744244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:38.744353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:39.705520       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:39.705578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:54.676244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:54.676303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 11:31:24.341304       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 11:31:24.349628       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
	E1202 11:31:24.349847       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:31:24.516756       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 11:31:24.516794       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 11:31:24.516840       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:31:24.521066       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:31:24.521669       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:31:24.521810       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:31:24.544631       1 config.go:199] "Starting service config controller"
	I1202 11:31:24.544656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:31:24.544760       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:31:24.544767       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:31:24.546593       1 config.go:328] "Starting node config controller"
	I1202 11:31:24.555031       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:31:24.644874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:31:24.644939       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:31:24.659340       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55] <==
	W1202 11:31:16.359632       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:16.359683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:16.359821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 11:31:16.359854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:16.359953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 11:31:16.359994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.170036       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 11:31:17.170071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.175313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 11:31:17.175385       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.343360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 11:31:17.343518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.423570       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:17.424077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.477506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:31:17.477537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.558906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:17.559078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.571603       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:31:17.571679       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:31:17.575523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:17.576010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.576175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 11:31:17.576213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1202 11:31:19.948258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 11:37:29 addons-093588 kubelet[1213]: E1202 11:37:29.203953    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139449203350685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:29 addons-093588 kubelet[1213]: E1202 11:37:29.203996    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139449203350685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:29 addons-093588 kubelet[1213]: I1202 11:37:29.980767    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9x4xz" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 11:37:39 addons-093588 kubelet[1213]: E1202 11:37:39.206760    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139459206344458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:39 addons-093588 kubelet[1213]: E1202 11:37:39.206797    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139459206344458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:49 addons-093588 kubelet[1213]: E1202 11:37:49.211314    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139469210628567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:49 addons-093588 kubelet[1213]: E1202 11:37:49.211411    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139469210628567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:59 addons-093588 kubelet[1213]: E1202 11:37:59.213840    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139479213420851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:59 addons-093588 kubelet[1213]: E1202 11:37:59.214314    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139479213420851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:03 addons-093588 kubelet[1213]: I1202 11:38:03.980213    1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 11:38:09 addons-093588 kubelet[1213]: E1202 11:38:09.216518    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139489216257058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:09 addons-093588 kubelet[1213]: E1202 11:38:09.216573    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139489216257058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:19 addons-093588 kubelet[1213]: E1202 11:38:19.005327    1213 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 11:38:19 addons-093588 kubelet[1213]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:38:19 addons-093588 kubelet[1213]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:38:19 addons-093588 kubelet[1213]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:38:19 addons-093588 kubelet[1213]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:38:19 addons-093588 kubelet[1213]: E1202 11:38:19.218756    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139499218401277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:19 addons-093588 kubelet[1213]: E1202 11:38:19.218777    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139499218401277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:29 addons-093588 kubelet[1213]: E1202 11:38:29.221907    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139509221381577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:29 addons-093588 kubelet[1213]: E1202 11:38:29.222189    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139509221381577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:39 addons-093588 kubelet[1213]: E1202 11:38:39.224927    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139519224574967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:39 addons-093588 kubelet[1213]: E1202 11:38:39.224993    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139519224574967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:49 addons-093588 kubelet[1213]: E1202 11:38:49.226988    1213 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139529226598378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:49 addons-093588 kubelet[1213]: E1202 11:38:49.227424    1213 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139529226598378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328] <==
	I1202 11:31:33.446653       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 11:31:33.475001       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 11:31:33.475232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 11:31:33.571674       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 11:31:33.584238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b!
	I1202 11:31:33.585297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a34dd670-d034-4d97-b122-ad1727e6d2ec", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b became leader
	I1202 11:31:33.684810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093588 -n addons-093588
helpers_test.go:261: (dbg) Run:  kubectl --context addons-093588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (328.33s)
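Note on the failure above: the CoreDNS entries in the post-mortem log show the in-cluster lookups for hello-world-app ultimately returning NOERROR, while the kube-apiserver log shows v1beta1.metrics.k8s.io repeatedly unreachable at 10.107.92.16:443 (connection refused), so the metrics API aggregation, not DNS, is the part worth inspecting. A minimal triage sketch, assuming the addons-093588 profile is still up and that minikube's metrics-server addon carries its usual k8s-app=metrics-server label (these commands are illustrative and not part of the test):

	# Is the aggregated metrics API registered and reporting Available=True?
	kubectl --context addons-093588 get apiservice v1beta1.metrics.k8s.io

	# Is the metrics-server pod running, and what do its logs say?
	kubectl --context addons-093588 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-093588 -n kube-system logs -l k8s-app=metrics-server --tail=50

	# Once the APIService is Available, resource metrics should be served.
	kubectl --context addons-093588 top nodes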

x
+
TestAddons/StoppedEnableDisable (154.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-093588
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-093588: exit status 82 (2m0.440537162s)

-- stdout --
	* Stopping node "addons-093588"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-093588" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-093588
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-093588: exit status 11 (21.459047621s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-093588" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-093588
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-093588: exit status 11 (6.144492184s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-093588" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-093588
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-093588: exit status 11 (6.144188675s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-093588" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.19s)
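
The three addon commands above fail for the same underlying reason as the stop itself: after the unfinished "minikube stop", the guest at 192.168.39.203 no longer answers on port 22, so the paused-check cannot even open an SSH session. Below is a minimal triage sketch (not part of the test run) for confirming the VM's state from the CI host before retrying; the profile name and IP are taken from the log above, and the virsh calls assume the kvm2 driver's default qemu:///system connection and its usual profile-named domain.

	# Hypothetical manual triage for the "no route to host" failures above.
	out/minikube-linux-amd64 status -p addons-093588 || true

	# What libvirt thinks the domain is doing (running vs. shut off).
	sudo virsh list --all | grep addons-093588

	# Probe SSH reachability the same way the addon commands effectively do.
	nc -vz -w 5 192.168.39.203 22

If the domain still shows as running while port 22 stays unreachable, the guest most likely wedged partway through shutdown, which would also explain the earlier exit status 82 from "minikube stop".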

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (12.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh pgrep buildkitd: exit status 1 (254.293474ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image build -t localhost/my-image:functional-054639 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 image build -t localhost/my-image:functional-054639 testdata/build --alsologtostderr: (9.674847099s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-054639 image build -t localhost/my-image:functional-054639 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0557b01da3f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-054639
--> 8d35dd1ba18
Successfully tagged localhost/my-image:functional-054639
8d35dd1ba18dcf28bfff63b253595f7efa03c5f282cb8b51467c987b6b92aaec
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-054639 image build -t localhost/my-image:functional-054639 testdata/build --alsologtostderr:
I1202 11:45:21.920882   22892 out.go:345] Setting OutFile to fd 1 ...
I1202 11:45:21.921055   22892 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:21.921066   22892 out.go:358] Setting ErrFile to fd 2...
I1202 11:45:21.921072   22892 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:21.921332   22892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
I1202 11:45:21.921906   22892 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:21.922406   22892 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:21.924850   22892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:21.924904   22892 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:21.941032   22892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
I1202 11:45:21.941522   22892 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:21.942081   22892 main.go:141] libmachine: Using API Version  1
I1202 11:45:21.942113   22892 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:21.942509   22892 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:21.942716   22892 main.go:141] libmachine: (functional-054639) Calling .GetState
I1202 11:45:21.944804   22892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:21.944853   22892 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:21.959509   22892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
I1202 11:45:21.959925   22892 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:21.960450   22892 main.go:141] libmachine: Using API Version  1
I1202 11:45:21.960478   22892 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:21.960793   22892 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:21.960994   22892 main.go:141] libmachine: (functional-054639) Calling .DriverName
I1202 11:45:21.961212   22892 ssh_runner.go:195] Run: systemctl --version
I1202 11:45:21.961244   22892 main.go:141] libmachine: (functional-054639) Calling .GetSSHHostname
I1202 11:45:21.964204   22892 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:21.964607   22892 main.go:141] libmachine: (functional-054639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:4f:3f", ip: ""} in network mk-functional-054639: {Iface:virbr1 ExpiryTime:2024-12-02 12:42:44 +0000 UTC Type:0 Mac:52:54:00:75:4f:3f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-054639 Clientid:01:52:54:00:75:4f:3f}
I1202 11:45:21.964636   22892 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined IP address 192.168.39.77 and MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:21.964785   22892 main.go:141] libmachine: (functional-054639) Calling .GetSSHPort
I1202 11:45:21.964968   22892 main.go:141] libmachine: (functional-054639) Calling .GetSSHKeyPath
I1202 11:45:21.965126   22892 main.go:141] libmachine: (functional-054639) Calling .GetSSHUsername
I1202 11:45:21.965286   22892 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/functional-054639/id_rsa Username:docker}
I1202 11:45:22.116263   22892 build_images.go:161] Building image from path: /tmp/build.1088767395.tar
I1202 11:45:22.116331   22892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 11:45:22.153907   22892 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1088767395.tar
I1202 11:45:22.168919   22892 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1088767395.tar: stat -c "%s %y" /var/lib/minikube/build/build.1088767395.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1088767395.tar': No such file or directory
I1202 11:45:22.168967   22892 ssh_runner.go:362] scp /tmp/build.1088767395.tar --> /var/lib/minikube/build/build.1088767395.tar (3072 bytes)
I1202 11:45:22.259929   22892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1088767395
I1202 11:45:22.290702   22892 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1088767395 -xf /var/lib/minikube/build/build.1088767395.tar
I1202 11:45:22.315943   22892 crio.go:315] Building image: /var/lib/minikube/build/build.1088767395
I1202 11:45:22.316028   22892 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-054639 /var/lib/minikube/build/build.1088767395 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 11:45:31.487043   22892 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-054639 /var/lib/minikube/build/build.1088767395 --cgroup-manager=cgroupfs: (9.170988817s)
I1202 11:45:31.487097   22892 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1088767395
I1202 11:45:31.513238   22892 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1088767395.tar
I1202 11:45:31.535052   22892 build_images.go:217] Built localhost/my-image:functional-054639 from /tmp/build.1088767395.tar
I1202 11:45:31.535081   22892 build_images.go:133] succeeded building to: functional-054639
I1202 11:45:31.535086   22892 build_images.go:134] failed building to: 
I1202 11:45:31.535144   22892 main.go:141] libmachine: Making call to close driver server
I1202 11:45:31.535153   22892 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:31.535441   22892 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:31.535456   22892 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:31.535464   22892 main.go:141] libmachine: Making call to close driver server
I1202 11:45:31.535471   22892 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:31.535691   22892 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:31.535712   22892 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls
E1202 11:45:33.105357   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 image ls: (2.344935694s)
functional_test.go:446: expected "localhost/my-image:functional-054639" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (12.27s)
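
The build itself completes (podman reports "Successfully tagged localhost/my-image:functional-054639" inside the guest), yet the image listing run immediately afterwards does not contain the tag, which is what trips the assertion at functional_test.go:446. The following is a small, hypothetical follow-up for checking where the image actually ended up on the node; the profile name and tag are taken from the log above, and the commands assume the usual shared crio/podman image storage on the minikube guest.

	# Hypothetical manual check inside the functional-054639 guest.
	# How podman (the builder used above) sees the image ...
	out/minikube-linux-amd64 -p functional-054639 ssh -- sudo podman images localhost/my-image

	# ... and how the CRI side sees it.
	out/minikube-linux-amd64 -p functional-054639 ssh -- sudo crictl images | grep my-image

	# Re-run the listing the test bases its assertion on.
	out/minikube-linux-amd64 -p functional-054639 image ls

If podman shows the tag but the listing still misses it, the problem is in the listing path rather than the build, which would match the clean build log above.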

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 node stop m02 -v=7 --alsologtostderr
E1202 11:50:42.346340   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:51:23.307737   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-604935 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.474461585s)

                                                
                                                
-- stdout --
	* Stopping node "ha-604935-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:50:24.482876   27430 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:50:24.483038   27430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:50:24.483049   27430 out.go:358] Setting ErrFile to fd 2...
	I1202 11:50:24.483056   27430 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:50:24.483227   27430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:50:24.483475   27430 mustload.go:65] Loading cluster: ha-604935
	I1202 11:50:24.484017   27430 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:50:24.484047   27430 stop.go:39] StopHost: ha-604935-m02
	I1202 11:50:24.484548   27430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:50:24.484592   27430 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:50:24.499705   27430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I1202 11:50:24.500119   27430 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:50:24.500595   27430 main.go:141] libmachine: Using API Version  1
	I1202 11:50:24.500616   27430 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:50:24.500941   27430 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:50:24.503225   27430 out.go:177] * Stopping node "ha-604935-m02"  ...
	I1202 11:50:24.504369   27430 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 11:50:24.504393   27430 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:50:24.504586   27430 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 11:50:24.504615   27430 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:50:24.507205   27430 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:50:24.507647   27430 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:50:24.507700   27430 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:50:24.507816   27430 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:50:24.507965   27430 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:50:24.508115   27430 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:50:24.508286   27430 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:50:24.611001   27430 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 11:50:24.670364   27430 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 11:50:24.726213   27430 main.go:141] libmachine: Stopping "ha-604935-m02"...
	I1202 11:50:24.726244   27430 main.go:141] libmachine: (ha-604935-m02) Calling .GetState
	I1202 11:50:24.727725   27430 main.go:141] libmachine: (ha-604935-m02) Calling .Stop
	I1202 11:50:24.731175   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 0/120
	I1202 11:50:25.732455   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 1/120
	I1202 11:50:26.734477   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 2/120
	I1202 11:50:27.736247   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 3/120
	I1202 11:50:28.737475   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 4/120
	I1202 11:50:29.739225   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 5/120
	I1202 11:50:30.740576   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 6/120
	I1202 11:50:31.741827   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 7/120
	I1202 11:50:32.743802   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 8/120
	I1202 11:50:33.745451   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 9/120
	I1202 11:50:34.747590   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 10/120
	I1202 11:50:35.749197   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 11/120
	I1202 11:50:36.750673   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 12/120
	I1202 11:50:37.751883   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 13/120
	I1202 11:50:38.753107   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 14/120
	I1202 11:50:39.754603   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 15/120
	I1202 11:50:40.756110   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 16/120
	I1202 11:50:41.757461   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 17/120
	I1202 11:50:42.758599   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 18/120
	I1202 11:50:43.760649   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 19/120
	I1202 11:50:44.762620   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 20/120
	I1202 11:50:45.763945   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 21/120
	I1202 11:50:46.765311   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 22/120
	I1202 11:50:47.766744   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 23/120
	I1202 11:50:48.768058   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 24/120
	I1202 11:50:49.769825   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 25/120
	I1202 11:50:50.771153   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 26/120
	I1202 11:50:51.772568   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 27/120
	I1202 11:50:52.773942   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 28/120
	I1202 11:50:53.775272   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 29/120
	I1202 11:50:54.777307   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 30/120
	I1202 11:50:55.778570   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 31/120
	I1202 11:50:56.779726   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 32/120
	I1202 11:50:57.780950   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 33/120
	I1202 11:50:58.782633   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 34/120
	I1202 11:50:59.784476   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 35/120
	I1202 11:51:00.785767   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 36/120
	I1202 11:51:01.787275   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 37/120
	I1202 11:51:02.788476   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 38/120
	I1202 11:51:03.790537   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 39/120
	I1202 11:51:04.792501   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 40/120
	I1202 11:51:05.793711   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 41/120
	I1202 11:51:06.794916   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 42/120
	I1202 11:51:07.796152   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 43/120
	I1202 11:51:08.798081   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 44/120
	I1202 11:51:09.799854   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 45/120
	I1202 11:51:10.801170   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 46/120
	I1202 11:51:11.802748   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 47/120
	I1202 11:51:12.803890   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 48/120
	I1202 11:51:13.804959   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 49/120
	I1202 11:51:14.807087   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 50/120
	I1202 11:51:15.808403   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 51/120
	I1202 11:51:16.810483   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 52/120
	I1202 11:51:17.811838   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 53/120
	I1202 11:51:18.812918   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 54/120
	I1202 11:51:19.814525   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 55/120
	I1202 11:51:20.815615   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 56/120
	I1202 11:51:21.816689   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 57/120
	I1202 11:51:22.817878   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 58/120
	I1202 11:51:23.819460   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 59/120
	I1202 11:51:24.820919   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 60/120
	I1202 11:51:25.822207   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 61/120
	I1202 11:51:26.823574   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 62/120
	I1202 11:51:27.825019   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 63/120
	I1202 11:51:28.826668   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 64/120
	I1202 11:51:29.828009   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 65/120
	I1202 11:51:30.829378   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 66/120
	I1202 11:51:31.830388   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 67/120
	I1202 11:51:32.831797   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 68/120
	I1202 11:51:33.833168   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 69/120
	I1202 11:51:34.835081   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 70/120
	I1202 11:51:35.836397   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 71/120
	I1202 11:51:36.838641   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 72/120
	I1202 11:51:37.839823   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 73/120
	I1202 11:51:38.841211   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 74/120
	I1202 11:51:39.843190   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 75/120
	I1202 11:51:40.844482   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 76/120
	I1202 11:51:41.846443   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 77/120
	I1202 11:51:42.847802   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 78/120
	I1202 11:51:43.849161   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 79/120
	I1202 11:51:44.851386   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 80/120
	I1202 11:51:45.853562   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 81/120
	I1202 11:51:46.854718   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 82/120
	I1202 11:51:47.856775   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 83/120
	I1202 11:51:48.858555   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 84/120
	I1202 11:51:49.859721   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 85/120
	I1202 11:51:50.860910   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 86/120
	I1202 11:51:51.862564   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 87/120
	I1202 11:51:52.863766   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 88/120
	I1202 11:51:53.865021   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 89/120
	I1202 11:51:54.866259   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 90/120
	I1202 11:51:55.867341   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 91/120
	I1202 11:51:56.868668   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 92/120
	I1202 11:51:57.869817   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 93/120
	I1202 11:51:58.871320   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 94/120
	I1202 11:51:59.873339   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 95/120
	I1202 11:52:00.874471   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 96/120
	I1202 11:52:01.875786   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 97/120
	I1202 11:52:02.877321   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 98/120
	I1202 11:52:03.878643   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 99/120
	I1202 11:52:04.880472   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 100/120
	I1202 11:52:05.882554   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 101/120
	I1202 11:52:06.883876   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 102/120
	I1202 11:52:07.885266   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 103/120
	I1202 11:52:08.886803   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 104/120
	I1202 11:52:09.888669   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 105/120
	I1202 11:52:10.890574   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 106/120
	I1202 11:52:11.891881   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 107/120
	I1202 11:52:12.893214   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 108/120
	I1202 11:52:13.894601   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 109/120
	I1202 11:52:14.896602   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 110/120
	I1202 11:52:15.898565   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 111/120
	I1202 11:52:16.899778   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 112/120
	I1202 11:52:17.901163   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 113/120
	I1202 11:52:18.902520   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 114/120
	I1202 11:52:19.903989   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 115/120
	I1202 11:52:20.906168   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 116/120
	I1202 11:52:21.907350   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 117/120
	I1202 11:52:22.909002   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 118/120
	I1202 11:52:23.910259   27430 main.go:141] libmachine: (ha-604935-m02) Waiting for machine to stop 119/120
	I1202 11:52:24.911581   27430 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1202 11:52:24.911800   27430 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-604935 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr: (18.724742682s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
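
The stop request is never refused: the config backups near the top of the stderr block complete, and the driver then polls the domain once a second, 120 times, without ever seeing it leave the "Running" state, which is where the 2m0.47s runtime and exit status 30 come from. The sketch below is a hypothetical way to chase the shutdown by hand on the CI host; the domain name follows the profile-plus-node naming visible in the log, and virsh is assumed to use the default qemu:///system connection of the kvm2 driver.

	# Hypothetical manual follow-up for a guest that ignores the ACPI stop request.
	sudo virsh list --all | grep ha-604935-m02

	# Ask the guest to shut down again and watch its reported state.
	sudo virsh shutdown ha-604935-m02
	sudo virsh domstate ha-604935-m02

	# Last resort: hard power-off, roughly equivalent to pulling the plug.
	sudo virsh destroy ha-604935-m02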
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-604935 -n ha-604935
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 logs -n 25: (1.346254157s)
E1202 11:52:45.229832   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m03_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m04 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp testdata/cp-test.txt                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m03 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-604935 node stop m02 -v=7                                                     | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:45:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:45:51.477333   23379 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:51.477429   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477436   23379 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:51.477440   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477579   23379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:45:51.478080   23379 out.go:352] Setting JSON to false
	I1202 11:45:51.478853   23379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1703,"bootTime":1733138248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:45:51.478907   23379 start.go:139] virtualization: kvm guest
	I1202 11:45:51.480873   23379 out.go:177] * [ha-604935] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:45:51.482060   23379 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:45:51.482068   23379 notify.go:220] Checking for updates...
	I1202 11:45:51.484245   23379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:45:51.485502   23379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:45:51.486630   23379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.487842   23379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:45:51.488928   23379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:45:51.490194   23379 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:45:51.523210   23379 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 11:45:51.524197   23379 start.go:297] selected driver: kvm2
	I1202 11:45:51.524207   23379 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:45:51.524217   23379 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:45:51.524886   23379 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.524953   23379 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:45:51.538752   23379 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:45:51.538805   23379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:45:51.539057   23379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:45:51.539096   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:45:51.539154   23379 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1202 11:45:51.539162   23379 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:45:51.539222   23379 start.go:340] cluster config:
	{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:45:51.539330   23379 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.540849   23379 out.go:177] * Starting "ha-604935" primary control-plane node in "ha-604935" cluster
	I1202 11:45:51.542035   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:45:51.542064   23379 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:45:51.542073   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:45:51.542155   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:45:51.542168   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:45:51.542474   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:45:51.542495   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json: {Name:mkd56e76e09e18927ad08e110fcb7c73441ee1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:45:51.542653   23379 start.go:360] acquireMachinesLock for ha-604935: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:45:51.542690   23379 start.go:364] duration metric: took 21.87µs to acquireMachinesLock for "ha-604935"
	I1202 11:45:51.542712   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:45:51.542769   23379 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 11:45:51.544215   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:45:51.544376   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:51.544410   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:51.558068   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I1202 11:45:51.558542   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:51.559117   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:45:51.559144   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:51.559441   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:51.559624   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:45:51.559747   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:45:51.559887   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:45:51.559913   23379 client.go:168] LocalClient.Create starting
	I1202 11:45:51.559938   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:45:51.559978   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.559999   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560059   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:45:51.560086   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.560103   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560134   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:45:51.560147   23379 main.go:141] libmachine: (ha-604935) Calling .PreCreateCheck
	I1202 11:45:51.560467   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:45:51.560846   23379 main.go:141] libmachine: Creating machine...
	I1202 11:45:51.560861   23379 main.go:141] libmachine: (ha-604935) Calling .Create
	I1202 11:45:51.560982   23379 main.go:141] libmachine: (ha-604935) Creating KVM machine...
	I1202 11:45:51.562114   23379 main.go:141] libmachine: (ha-604935) DBG | found existing default KVM network
	I1202 11:45:51.562698   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.562571   23402 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002231e0}
	I1202 11:45:51.562725   23379 main.go:141] libmachine: (ha-604935) DBG | created network xml: 
	I1202 11:45:51.562738   23379 main.go:141] libmachine: (ha-604935) DBG | <network>
	I1202 11:45:51.562750   23379 main.go:141] libmachine: (ha-604935) DBG |   <name>mk-ha-604935</name>
	I1202 11:45:51.562762   23379 main.go:141] libmachine: (ha-604935) DBG |   <dns enable='no'/>
	I1202 11:45:51.562773   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562781   23379 main.go:141] libmachine: (ha-604935) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1202 11:45:51.562793   23379 main.go:141] libmachine: (ha-604935) DBG |     <dhcp>
	I1202 11:45:51.562803   23379 main.go:141] libmachine: (ha-604935) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1202 11:45:51.562814   23379 main.go:141] libmachine: (ha-604935) DBG |     </dhcp>
	I1202 11:45:51.562827   23379 main.go:141] libmachine: (ha-604935) DBG |   </ip>
	I1202 11:45:51.562839   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562849   23379 main.go:141] libmachine: (ha-604935) DBG | </network>
	I1202 11:45:51.562861   23379 main.go:141] libmachine: (ha-604935) DBG | 
	I1202 11:45:51.567359   23379 main.go:141] libmachine: (ha-604935) DBG | trying to create private KVM network mk-ha-604935 192.168.39.0/24...
	I1202 11:45:51.627851   23379 main.go:141] libmachine: (ha-604935) DBG | private KVM network mk-ha-604935 192.168.39.0/24 created
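
The network XML printed above is what the KVM driver hands to libvirt when it creates the private `mk-ha-604935` network. As a rough, non-authoritative illustration (the driver calls the libvirt API directly rather than the CLI), the same network could be defined and started with `virsh` from a file holding that XML. The network name, subnet, and DHCP range below come from the log; the temp-file path and the use of `virsh` are assumptions for the sketch.

```go
// sketch: define and start the private network shown above via the virsh CLI
// (assumption: virsh is on PATH and qemu:///system is reachable).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	xmlPath := "/tmp/mk-ha-604935.xml" // hypothetical temp file holding the <network> XML from the log

	netXML := `<network>
  <name>mk-ha-604935</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

	if err := os.WriteFile(xmlPath, []byte(netXML), 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"net-define", xmlPath},
		{"net-start", "mk-ha-604935"},
		{"net-autostart", "mk-ha-604935"}, // optional; the log does not show whether autostart is set
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("virsh %v: %v", args, err)
		}
	}
}
```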
	I1202 11:45:51.627878   23379 main.go:141] libmachine: (ha-604935) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:51.627909   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.627845   23402 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.627936   23379 main.go:141] libmachine: (ha-604935) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:45:51.627956   23379 main.go:141] libmachine: (ha-604935) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:45:51.873906   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.873783   23402 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa...
	I1202 11:45:52.258389   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258298   23402 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk...
	I1202 11:45:52.258412   23379 main.go:141] libmachine: (ha-604935) DBG | Writing magic tar header
	I1202 11:45:52.258421   23379 main.go:141] libmachine: (ha-604935) DBG | Writing SSH key tar header
	I1202 11:45:52.258433   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258404   23402 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:52.258549   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935
	I1202 11:45:52.258587   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:45:52.258600   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 (perms=drwx------)
	I1202 11:45:52.258612   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:45:52.258622   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:45:52.258639   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:45:52.258670   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:45:52.258686   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:52.258699   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:45:52.258711   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:45:52.258726   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:45:52.258742   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:52.258748   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:45:52.258755   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home
	I1202 11:45:52.258760   23379 main.go:141] libmachine: (ha-604935) DBG | Skipping /home - not owner
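
For orientation, the store-path, raw-disk, and "Setting executable bit" steps logged above reduce to ordinary filesystem operations. The sketch below reproduces them with the Go standard library; the directory layout and the 20000 MB size come from the log, while the exact permission values and the tar-header trick that embeds the SSH key inside the raw disk are not shown in detail here and are therefore omitted or assumed.

```go
// sketch: create the machine directory, a sparse raw disk, and relaxed
// permissions on the parent directories, mirroring the log lines above.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	machineDir := filepath.Join(os.Getenv("HOME"), ".minikube", "machines", "ha-604935")

	// drwx------ on the machine dir, as the "Setting executable bit set ... (perms=drwx------)" line reports.
	if err := os.MkdirAll(machineDir, 0o700); err != nil {
		log.Fatal(err)
	}

	// A sparse raw disk: Truncate reserves the size without allocating blocks.
	disk, err := os.Create(filepath.Join(machineDir, "ha-604935.rawdisk"))
	if err != nil {
		log.Fatal(err)
	}
	defer disk.Close()
	const diskMB = 20000 // Disk=20000MB from the createHost log line
	if err := disk.Truncate(diskMB * 1024 * 1024); err != nil {
		log.Fatal(err)
	}

	// Walk up the tree making parents traversable, stopping at the first
	// directory we don't own -- compare "Skipping /home - not owner".
	for dir := filepath.Dir(machineDir); dir != "/"; dir = filepath.Dir(dir) {
		if err := os.Chmod(dir, 0o755); err != nil {
			break
		}
	}
}
```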
	I1202 11:45:52.259679   23379 main.go:141] libmachine: (ha-604935) define libvirt domain using xml: 
	I1202 11:45:52.259691   23379 main.go:141] libmachine: (ha-604935) <domain type='kvm'>
	I1202 11:45:52.259699   23379 main.go:141] libmachine: (ha-604935)   <name>ha-604935</name>
	I1202 11:45:52.259718   23379 main.go:141] libmachine: (ha-604935)   <memory unit='MiB'>2200</memory>
	I1202 11:45:52.259726   23379 main.go:141] libmachine: (ha-604935)   <vcpu>2</vcpu>
	I1202 11:45:52.259737   23379 main.go:141] libmachine: (ha-604935)   <features>
	I1202 11:45:52.259745   23379 main.go:141] libmachine: (ha-604935)     <acpi/>
	I1202 11:45:52.259755   23379 main.go:141] libmachine: (ha-604935)     <apic/>
	I1202 11:45:52.259762   23379 main.go:141] libmachine: (ha-604935)     <pae/>
	I1202 11:45:52.259776   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.259792   23379 main.go:141] libmachine: (ha-604935)   </features>
	I1202 11:45:52.259808   23379 main.go:141] libmachine: (ha-604935)   <cpu mode='host-passthrough'>
	I1202 11:45:52.259826   23379 main.go:141] libmachine: (ha-604935)   
	I1202 11:45:52.259835   23379 main.go:141] libmachine: (ha-604935)   </cpu>
	I1202 11:45:52.259843   23379 main.go:141] libmachine: (ha-604935)   <os>
	I1202 11:45:52.259851   23379 main.go:141] libmachine: (ha-604935)     <type>hvm</type>
	I1202 11:45:52.259863   23379 main.go:141] libmachine: (ha-604935)     <boot dev='cdrom'/>
	I1202 11:45:52.259871   23379 main.go:141] libmachine: (ha-604935)     <boot dev='hd'/>
	I1202 11:45:52.259896   23379 main.go:141] libmachine: (ha-604935)     <bootmenu enable='no'/>
	I1202 11:45:52.259912   23379 main.go:141] libmachine: (ha-604935)   </os>
	I1202 11:45:52.259917   23379 main.go:141] libmachine: (ha-604935)   <devices>
	I1202 11:45:52.259925   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='cdrom'>
	I1202 11:45:52.259935   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/boot2docker.iso'/>
	I1202 11:45:52.259939   23379 main.go:141] libmachine: (ha-604935)       <target dev='hdc' bus='scsi'/>
	I1202 11:45:52.259944   23379 main.go:141] libmachine: (ha-604935)       <readonly/>
	I1202 11:45:52.259951   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259956   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='disk'>
	I1202 11:45:52.259963   23379 main.go:141] libmachine: (ha-604935)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:45:52.259970   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk'/>
	I1202 11:45:52.259978   23379 main.go:141] libmachine: (ha-604935)       <target dev='hda' bus='virtio'/>
	I1202 11:45:52.259982   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259992   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260000   23379 main.go:141] libmachine: (ha-604935)       <source network='mk-ha-604935'/>
	I1202 11:45:52.260004   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260011   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260015   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260020   23379 main.go:141] libmachine: (ha-604935)       <source network='default'/>
	I1202 11:45:52.260026   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260031   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260035   23379 main.go:141] libmachine: (ha-604935)     <serial type='pty'>
	I1202 11:45:52.260040   23379 main.go:141] libmachine: (ha-604935)       <target port='0'/>
	I1202 11:45:52.260045   23379 main.go:141] libmachine: (ha-604935)     </serial>
	I1202 11:45:52.260050   23379 main.go:141] libmachine: (ha-604935)     <console type='pty'>
	I1202 11:45:52.260059   23379 main.go:141] libmachine: (ha-604935)       <target type='serial' port='0'/>
	I1202 11:45:52.260081   23379 main.go:141] libmachine: (ha-604935)     </console>
	I1202 11:45:52.260097   23379 main.go:141] libmachine: (ha-604935)     <rng model='virtio'>
	I1202 11:45:52.260105   23379 main.go:141] libmachine: (ha-604935)       <backend model='random'>/dev/random</backend>
	I1202 11:45:52.260113   23379 main.go:141] libmachine: (ha-604935)     </rng>
	I1202 11:45:52.260119   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260131   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260139   23379 main.go:141] libmachine: (ha-604935)   </devices>
	I1202 11:45:52.260142   23379 main.go:141] libmachine: (ha-604935) </domain>
	I1202 11:45:52.260148   23379 main.go:141] libmachine: (ha-604935) 
	I1202 11:45:52.264453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e2:c6:db in network default
	I1202 11:45:52.264963   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:52.264976   23379 main.go:141] libmachine: (ha-604935) Ensuring networks are active...
	I1202 11:45:52.265536   23379 main.go:141] libmachine: (ha-604935) Ensuring network default is active
	I1202 11:45:52.265809   23379 main.go:141] libmachine: (ha-604935) Ensuring network mk-ha-604935 is active
	I1202 11:45:52.266301   23379 main.go:141] libmachine: (ha-604935) Getting domain xml...
	I1202 11:45:52.266972   23379 main.go:141] libmachine: (ha-604935) Creating domain...
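
The domain XML above is passed to libvirt's define call, and the "has defined MAC address ... in network ..." lines that follow come from reading the defined domain back. A minimal way to reproduce that read-back outside minikube is to parse `virsh dumpxml` output, as sketched below; the struct only covers the attributes used here, the domain name comes from the log, and everything else is an assumption.

```go
// sketch: list the MAC address and network of each <interface> in a defined
// domain by parsing `virsh dumpxml` output (assumes the ha-604935 domain exists).
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"os/exec"
)

type domainXML struct {
	Devices struct {
		Interfaces []struct {
			Source struct {
				Network string `xml:"network,attr"`
			} `xml:"source"`
			MAC struct {
				Address string `xml:"address,attr"`
			} `xml:"mac"`
		} `xml:"interface"`
	} `xml:"devices"`
}

func main() {
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "dumpxml", "ha-604935").Output()
	if err != nil {
		log.Fatal(err)
	}
	var dom domainXML
	if err := xml.Unmarshal(out, &dom); err != nil {
		log.Fatal(err)
	}
	for _, iface := range dom.Devices.Interfaces {
		// mirrors "domain ha-604935 has defined MAC address ... in network ..."
		fmt.Printf("domain ha-604935 has MAC %s in network %s\n", iface.MAC.Address, iface.Source.Network)
	}
}
```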
	I1202 11:45:53.425942   23379 main.go:141] libmachine: (ha-604935) Waiting to get IP...
	I1202 11:45:53.426812   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.427160   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.427221   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.427145   23402 retry.go:31] will retry after 201.077519ms: waiting for machine to come up
	I1202 11:45:53.629564   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.629950   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.629976   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.629910   23402 retry.go:31] will retry after 339.273732ms: waiting for machine to come up
	I1202 11:45:53.970328   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.970740   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.970764   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.970705   23402 retry.go:31] will retry after 350.772564ms: waiting for machine to come up
	I1202 11:45:54.323244   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.323628   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.323652   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.323595   23402 retry.go:31] will retry after 510.154735ms: waiting for machine to come up
	I1202 11:45:54.834818   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.835184   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.835211   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.835141   23402 retry.go:31] will retry after 497.813223ms: waiting for machine to come up
	I1202 11:45:55.334326   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.334697   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.334728   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.334631   23402 retry.go:31] will retry after 593.538742ms: waiting for machine to come up
	I1202 11:45:55.929133   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.929547   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.929575   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.929508   23402 retry.go:31] will retry after 1.005519689s: waiting for machine to come up
	I1202 11:45:56.936100   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:56.936549   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:56.936581   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:56.936492   23402 retry.go:31] will retry after 1.273475187s: waiting for machine to come up
	I1202 11:45:58.211849   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:58.212240   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:58.212280   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:58.212213   23402 retry.go:31] will retry after 1.292529083s: waiting for machine to come up
	I1202 11:45:59.506572   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:59.506909   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:59.506934   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:59.506880   23402 retry.go:31] will retry after 1.800735236s: waiting for machine to come up
	I1202 11:46:01.309936   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:01.310447   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:01.310467   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:01.310416   23402 retry.go:31] will retry after 2.83980414s: waiting for machine to come up
	I1202 11:46:04.153261   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:04.153728   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:04.153748   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:04.153704   23402 retry.go:31] will retry after 2.497515599s: waiting for machine to come up
	I1202 11:46:06.652765   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:06.653095   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:06.653119   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:06.653068   23402 retry.go:31] will retry after 2.762441656s: waiting for machine to come up
	I1202 11:46:09.418859   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:09.419194   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:09.419220   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:09.419149   23402 retry.go:31] will retry after 3.896839408s: waiting for machine to come up
	I1202 11:46:13.318223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318677   23379 main.go:141] libmachine: (ha-604935) Found IP for machine: 192.168.39.102
	I1202 11:46:13.318696   23379 main.go:141] libmachine: (ha-604935) Reserving static IP address...
	I1202 11:46:13.318709   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has current primary IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318957   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "ha-604935", mac: "52:54:00:e0:fa:7c", ip: "192.168.39.102"} in network mk-ha-604935
	I1202 11:46:13.386650   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:13.386676   23379 main.go:141] libmachine: (ha-604935) Reserved static IP address: 192.168.39.102
	I1202 11:46:13.386705   23379 main.go:141] libmachine: (ha-604935) Waiting for SSH to be available...
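
The long run of "will retry after …: waiting for machine to come up" lines above is a polling loop with growing, slightly randomized delays until the DHCP lease appears. The helper below is a generic sketch of that pattern, not minikube's retry.go; the initial delay, growth factor, cap, and jitter are assumptions chosen to resemble the intervals in the log.

```go
// sketch: poll a condition with growing, jittered delays, roughly matching the
// "will retry after Xms: waiting for machine to come up" cadence in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or the deadline passes,
// sleeping a little longer (plus jitter) after each failure.
func retry(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	err := retry(30*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil // pretend the DHCP lease finally showed up
	})
	fmt.Println("done:", err)
}
```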
	I1202 11:46:13.389178   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.389540   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935
	I1202 11:46:13.389567   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:e0:fa:7c
	I1202 11:46:13.389737   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:13.389771   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:13.389833   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:13.389853   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:13.389865   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:13.393280   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:46:13.393302   23379 main.go:141] libmachine: (ha-604935) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:46:13.393311   23379 main.go:141] libmachine: (ha-604935) DBG | command : exit 0
	I1202 11:46:13.393319   23379 main.go:141] libmachine: (ha-604935) DBG | err     : exit status 255
	I1202 11:46:13.393329   23379 main.go:141] libmachine: (ha-604935) DBG | output  : 
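
WaitForSSH shells out to the system ssh client with the options shown above and simply runs `exit 0`; a non-zero status (the 255 here, seen before the guest has an address) means sshd is not reachable yet, and the probe is repeated. Below is a minimal sketch of that probe using os/exec; the host, key path, and ssh options are copied from the log, while the 3-second retry interval is an assumption based on the timestamps.

```go
// sketch: probe SSH readiness by running `exit 0` through the system ssh
// client with the same options the WaitForSSH log lines show.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+host,
		"exit 0",
	)
	return cmd.Run() == nil // exit status 0 means sshd accepted the key and ran the command
}

func main() {
	key := "/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa"
	for !sshReady("192.168.39.102", key) {
		fmt.Println("ssh not ready yet, retrying in 3s") // interval assumed from the log timestamps
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}
```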
	I1202 11:46:16.395489   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:16.397696   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398004   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.398035   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398057   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:16.398092   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:16.398150   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:16.398173   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:16.398186   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:16.524025   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: <nil>: 
	I1202 11:46:16.524319   23379 main.go:141] libmachine: (ha-604935) KVM machine creation complete!
	I1202 11:46:16.524585   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:16.525132   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525296   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525429   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:46:16.525444   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:16.526494   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:46:16.526509   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:46:16.526516   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:46:16.526523   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.528453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.528856   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.528879   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.529035   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.529215   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529537   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.529694   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.529924   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.529940   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:46:16.639198   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.639221   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:46:16.639229   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.641755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642065   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.642082   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642197   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.642389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642587   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642718   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.642866   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.643032   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.643046   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:46:16.748649   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:46:16.748721   23379 main.go:141] libmachine: found compatible host: buildroot
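
"Detecting the provisioner" is a `cat /etc/os-release` over SSH followed by matching the ID/NAME fields, which here resolve to Buildroot. The sketch below parses that key=value format; the sample input is the exact output captured above, and the selection logic is simplified to the single check that matters in this log.

```go
// sketch: parse the key=value format of /etc/os-release, as used to decide
// that the host is Buildroot and should be provisioned accordingly.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func parseOSRelease(s string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, val, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[key] = strings.Trim(val, `"`) // values such as PRETTY_NAME are quoted
	}
	return fields
}

func main() {
	f := parseOSRelease(osRelease)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
	fmt.Println(f["PRETTY_NAME"])
}
```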
	I1202 11:46:16.748732   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:46:16.748738   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.748943   23379 buildroot.go:166] provisioning hostname "ha-604935"
	I1202 11:46:16.748965   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.749139   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.751455   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751828   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.751862   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751971   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.752141   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752285   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752419   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.752578   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.752754   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.752769   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935 && echo "ha-604935" | sudo tee /etc/hostname
	I1202 11:46:16.869057   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:46:16.869084   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.871187   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871464   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.871482   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871651   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.871810   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.871940   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.872049   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.872201   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.872396   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.872412   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:46:16.984630   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.984655   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:46:16.984684   23379 buildroot.go:174] setting up certificates
	I1202 11:46:16.984696   23379 provision.go:84] configureAuth start
	I1202 11:46:16.984709   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.984946   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:16.987426   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987732   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.987755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987901   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.989843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990098   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.990122   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990257   23379 provision.go:143] copyHostCerts
	I1202 11:46:16.990285   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990325   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:46:16.990334   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990403   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:46:16.990485   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990508   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:46:16.990522   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990547   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:46:16.990600   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990616   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:46:16.990622   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990641   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:46:16.990697   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935 san=[127.0.0.1 192.168.39.102 ha-604935 localhost minikube]
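
configureAuth then generates a server certificate whose SANs are the list printed above (127.0.0.1, 192.168.39.102, ha-604935, localhost, minikube). The sketch below only illustrates how such a SAN list maps onto a crypto/x509 template: for brevity it self-signs, whereas the real flow signs with ca-key.pem, so treat the key usage, validity period, and signing step as assumptions.

```go
// sketch: build an x509 server certificate carrying the SANs from the log.
// Simplification: self-signed here; the real flow signs with the minikube CA key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935"}}, // org string from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as printed in the provision.go line above.
		DNSNames:    []string{"ha-604935", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```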
	I1202 11:46:17.091711   23379 provision.go:177] copyRemoteCerts
	I1202 11:46:17.091762   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:46:17.091783   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.093867   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094147   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.094176   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094310   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.094467   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.094595   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.094701   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.178212   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:46:17.178264   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:46:17.201820   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:46:17.201876   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:46:17.224492   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:46:17.224550   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1202 11:46:17.246969   23379 provision.go:87] duration metric: took 262.263543ms to configureAuth
	I1202 11:46:17.246987   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:46:17.247165   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:17.247239   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.249583   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.249877   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.249899   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.250032   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.250183   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250315   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250423   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.250529   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.250670   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.250686   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:46:17.469650   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:46:17.469676   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:46:17.469685   23379 main.go:141] libmachine: (ha-604935) Calling .GetURL
	I1202 11:46:17.470859   23379 main.go:141] libmachine: (ha-604935) DBG | Using libvirt version 6000000
	I1202 11:46:17.472792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473049   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.473078   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473161   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:46:17.473172   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:46:17.473179   23379 client.go:171] duration metric: took 25.91325953s to LocalClient.Create
	I1202 11:46:17.473201   23379 start.go:167] duration metric: took 25.913314916s to libmachine.API.Create "ha-604935"
	I1202 11:46:17.473214   23379 start.go:293] postStartSetup for "ha-604935" (driver="kvm2")
	I1202 11:46:17.473228   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:46:17.473243   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.473431   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:46:17.473460   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.475686   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.475977   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.476003   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.476117   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.476292   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.476424   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.476570   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.558504   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:46:17.562731   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:46:17.562753   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:46:17.562801   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:46:17.562870   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:46:17.562886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:46:17.562973   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:46:17.572589   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:17.596338   23379 start.go:296] duration metric: took 123.108175ms for postStartSetup
	I1202 11:46:17.596385   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:17.596933   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.599535   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.599863   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.599888   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.600036   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:17.600197   23379 start.go:128] duration metric: took 26.057419293s to createHost
	I1202 11:46:17.600216   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.602393   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602679   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.602700   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602888   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.603033   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603150   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603243   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.603351   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.603548   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.603565   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:46:17.708694   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139977.687468447
	
	I1202 11:46:17.708715   23379 fix.go:216] guest clock: 1733139977.687468447
	I1202 11:46:17.708724   23379 fix.go:229] Guest: 2024-12-02 11:46:17.687468447 +0000 UTC Remote: 2024-12-02 11:46:17.600208028 +0000 UTC m=+26.158965969 (delta=87.260419ms)
	I1202 11:46:17.708747   23379 fix.go:200] guest clock delta is within tolerance: 87.260419ms
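
The clock check above runs `date +%s.%N` in the guest, parses the result, and compares it to the host clock; the 87 ms delta is well inside the allowed drift, so no resync is needed. A compact sketch of that comparison follows; the 2-second tolerance is an assumption for illustration only, not the threshold fix.go actually uses.

```go
// sketch: compare a guest timestamp from `date +%s.%N` against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1733139977.687468447" // value returned over SSH in the log

	secStr, nsecStr, _ := strings.Cut(guestOut, ".")
	sec, _ := strconv.ParseInt(secStr, 10, 64)
	nsec, _ := strconv.ParseInt(nsecStr, 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual value
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v, would resync\n", delta)
	}
}
```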
	I1202 11:46:17.708757   23379 start.go:83] releasing machines lock for "ha-604935", held for 26.166055586s
	I1202 11:46:17.708779   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.708992   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.711541   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711821   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.711843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711972   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712458   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712646   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712736   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:46:17.712776   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.712829   23379 ssh_runner.go:195] Run: cat /version.json
	I1202 11:46:17.712853   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.715060   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715759   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.715798   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715960   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716014   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716187   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716313   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.716339   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716347   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716430   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716502   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.716582   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716706   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716827   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.792614   23379 ssh_runner.go:195] Run: systemctl --version
	I1202 11:46:17.813470   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:46:17.973535   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:46:17.979920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:46:17.979975   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:46:17.995437   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:46:17.995459   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:46:17.995503   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:46:18.012152   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:46:18.026749   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:46:18.026813   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:46:18.040895   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:46:18.054867   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:46:18.182673   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:46:18.307537   23379 docker.go:233] disabling docker service ...
	I1202 11:46:18.307608   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:46:18.321854   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:46:18.334016   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:46:18.463785   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:46:18.581750   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:46:18.594915   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:46:18.612956   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:46:18.613013   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.623443   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:46:18.623494   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.633789   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.643912   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.654023   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:46:18.664581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.674994   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.691561   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
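
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, move conmon into the "pod" cgroup, and add a default_sysctls block that opens unprivileged port 0. The sketch below applies equivalent edits with Go regexps instead of sed, purely to make the resulting drop-in easier to read; the starting file content is an assumption.

```go
// sketch: the same 02-crio.conf edits the sed commands above perform,
// expressed as Go regexp transformations over an assumed starting file.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// pin the pause image and switch to the cgroupfs driver
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// drop any existing conmon_cgroup line, then add conmon_cgroup = "pod"
	// right after cgroup_manager, as the sed '/a' command does
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	// ensure a default_sysctls block that allows unprivileged low ports
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
			ReplaceAllString(conf, "$1\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
	}

	fmt.Print(conf)
}
```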
	I1202 11:46:18.701797   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:46:18.711042   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:46:18.711090   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:46:18.724638   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:46:18.733743   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:18.862034   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:46:18.949557   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:46:18.949630   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:46:18.954402   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:46:18.954482   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:46:18.958128   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:46:18.997454   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:46:18.997519   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.025104   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.055599   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:46:19.056875   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:19.059223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059530   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:19.059555   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059704   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:46:19.063855   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:19.078703   23379 kubeadm.go:883] updating cluster {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:46:19.078793   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:19.078828   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:19.116305   23379 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 11:46:19.116376   23379 ssh_runner.go:195] Run: which lz4
	I1202 11:46:19.120271   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1202 11:46:19.120778   23379 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 11:46:19.126218   23379 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 11:46:19.126239   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 11:46:20.425373   23379 crio.go:462] duration metric: took 1.305048201s to copy over tarball
	I1202 11:46:20.425452   23379 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 11:46:22.441192   23379 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01571139s)
	I1202 11:46:22.441225   23379 crio.go:469] duration metric: took 2.015821089s to extract the tarball
	I1202 11:46:22.441233   23379 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 11:46:22.478991   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:22.530052   23379 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:46:22.530074   23379 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:46:22.530083   23379 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1202 11:46:22.530186   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:46:22.530263   23379 ssh_runner.go:195] Run: crio config
	I1202 11:46:22.572985   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:22.573005   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:22.573014   23379 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:46:22.573034   23379 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-604935 NodeName:ha-604935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:46:22.573152   23379 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-604935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
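The kubeadm, kubelet and kube-proxy configuration above is emitted by minikube with per-node values (node name, node IP, pod subnet) filled in. The sketch below is only a rough illustration of that kind of templating with Go's text/template; the struct, field names and abbreviated template text are invented for the example and are not minikube's real types or its full config:

package main

import (
	"os"
	"text/template"
)

// nodeConfig carries the per-node values substituted into the template.
// Field names are illustrative, not minikube's actual types.
type nodeConfig struct {
	NodeName  string
	NodeIP    string
	PodSubnet string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	cfg := nodeConfig{NodeName: "ha-604935", NodeIP: "192.168.39.102", PodSubnet: "10.244.0.0/16"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}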
	
	I1202 11:46:22.573183   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:46:22.573233   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:46:22.589221   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:46:22.589338   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
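The kube-vip static pod manifest above advertises the control-plane VIP 192.168.39.254 and, with lb_enable/lb_port set, load-balances the API server on port 8443. Once the control plane is up, reachability of that VIP can be probed with a plain TCP dial; a minimal sketch, with the address taken from the manifest above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try to open a TCP connection to the control-plane VIP published by kube-vip.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 5*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable:", conn.RemoteAddr())
}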
	I1202 11:46:22.589405   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:46:22.599190   23379 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:46:22.599242   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:46:22.608607   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1202 11:46:22.624652   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:46:22.640379   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1202 11:46:22.655900   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1202 11:46:22.671590   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:46:22.675287   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:22.687449   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:22.815343   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:46:22.830770   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.102
	I1202 11:46:22.830783   23379 certs.go:194] generating shared ca certs ...
	I1202 11:46:22.830798   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.830938   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:46:22.830989   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
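The certs.go lines above reuse the existing CA material because the files on disk are still valid. A simplified sketch of such a validity check with crypto/x509 follows; it only compares NotAfter against the current time (minikube's own check may be stricter), and the path is the host-side ca.crt referenced later in this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Treat the certificate as reusable if it has not expired yet.
	if time.Now().After(cert.NotAfter) {
		fmt.Println("CA certificate expired at", cert.NotAfter)
	} else {
		fmt.Println("CA certificate valid until", cert.NotAfter)
	}
}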
	I1202 11:46:22.831001   23379 certs.go:256] generating profile certs ...
	I1202 11:46:22.831074   23379 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:46:22.831100   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt with IP's: []
	I1202 11:46:22.963911   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt ...
	I1202 11:46:22.963935   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt: {Name:mk5750a5db627315b9b01ec40b88a97f880b8d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964093   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key ...
	I1202 11:46:22.964105   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key: {Name:mk12b4799c6c082b6ae6dcb6d50922caccda6be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964176   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd
	I1202 11:46:22.964216   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1202 11:46:23.245751   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd ...
	I1202 11:46:23.245777   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd: {Name:mkd02d0517ee36862fb48fa866d0eddc37aac5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.245919   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd ...
	I1202 11:46:23.245934   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd: {Name:mkafae41baf5ffd85374c686e8a6a230d6cd62ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.246014   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:46:23.246102   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:46:23.246163   23379 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:46:23.246178   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt with IP's: []
	I1202 11:46:23.398901   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt ...
	I1202 11:46:23.398937   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt: {Name:mk59ab7004f92d658850310a3f6a84461f824e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399105   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key ...
	I1202 11:46:23.399117   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key: {Name:mk4341731ba8ea8693d50dafd7cfc413608c74fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399195   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:46:23.399214   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:46:23.399232   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:46:23.399248   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:46:23.399263   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:46:23.399278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:46:23.399293   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:46:23.399307   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:46:23.399357   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:46:23.399393   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:46:23.399404   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:46:23.399426   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:46:23.399453   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:46:23.399485   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:46:23.399528   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:23.399560   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.399576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.399590   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.400135   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:46:23.425287   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:46:23.447899   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:46:23.470786   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:46:23.493867   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 11:46:23.517308   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:46:23.540273   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:46:23.562862   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:46:23.587751   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:46:23.615307   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:46:23.645819   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:46:23.670226   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:46:23.686120   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:46:23.691724   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:46:23.702611   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.706991   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.707032   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.712771   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:46:23.723671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:46:23.734402   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738713   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738746   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.744060   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:46:23.754804   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:46:23.765363   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769594   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769630   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.774953   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:46:23.785412   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:46:23.789341   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:46:23.789402   23379 kubeadm.go:392] StartCluster: {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:46:23.789461   23379 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:46:23.789507   23379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:46:23.829185   23379 cri.go:89] found id: ""
	I1202 11:46:23.829258   23379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:46:23.839482   23379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:46:23.849018   23379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:46:23.858723   23379 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:46:23.858741   23379 kubeadm.go:157] found existing configuration files:
	
	I1202 11:46:23.858784   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:46:23.867813   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:46:23.867858   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:46:23.877083   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:46:23.886137   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:46:23.886182   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:46:23.895526   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.904513   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:46:23.904574   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.913938   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:46:23.922913   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:46:23.922950   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 11:46:23.932249   23379 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 11:46:24.043553   23379 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:46:24.043623   23379 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:46:24.150207   23379 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:46:24.150352   23379 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:46:24.150497   23379 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:46:24.159626   23379 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:46:24.195667   23379 out.go:235]   - Generating certificates and keys ...
	I1202 11:46:24.195776   23379 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:46:24.195834   23379 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:46:24.358436   23379 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:46:24.683719   23379 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:46:24.943667   23379 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:46:25.032560   23379 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:46:25.140726   23379 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:46:25.140883   23379 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.414720   23379 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:46:25.414972   23379 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.596308   23379 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:46:25.682848   23379 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:46:25.908682   23379 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:46:25.908968   23379 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:46:26.057865   23379 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:46:26.190529   23379 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:46:26.320151   23379 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:46:26.522118   23379 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:46:26.687579   23379 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:46:26.688353   23379 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:46:26.693709   23379 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:46:26.695397   23379 out.go:235]   - Booting up control plane ...
	I1202 11:46:26.695494   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:46:26.695563   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:46:26.696118   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:46:26.712309   23379 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:46:26.721469   23379 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:46:26.721525   23379 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:46:26.849672   23379 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:46:26.849831   23379 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:46:27.850918   23379 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001821143s
	I1202 11:46:27.850997   23379 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:46:33.482873   23379 kubeadm.go:310] [api-check] The API server is healthy after 5.633037057s
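kubeadm's wait-control-plane phase above polls the kubelet's healthz endpoint at http://127.0.0.1:10248/healthz and then the API server until each answers. A minimal sketch of that kind of health poll against the kubelet endpoint; the 4-minute budget mirrors the "up to 4m0s" in the log, the rest is illustrative:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthy reports whether the endpoint answers with HTTP 200.
func healthy(url string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the log allows "up to 4m0s"
	for time.Now().Before(deadline) {
		if healthy("http://127.0.0.1:10248/healthz") {
			fmt.Println("kubelet reports healthy")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for kubelet healthz")
}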
	I1202 11:46:33.492749   23379 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:46:33.512336   23379 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:46:34.037238   23379 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:46:34.037452   23379 kubeadm.go:310] [mark-control-plane] Marking the node ha-604935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:46:34.050856   23379 kubeadm.go:310] [bootstrap-token] Using token: 8kw29b.di3rsap6xz9ot94t
	I1202 11:46:34.052035   23379 out.go:235]   - Configuring RBAC rules ...
	I1202 11:46:34.052182   23379 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:46:34.058440   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:46:34.073861   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:46:34.076499   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:46:34.079628   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:46:34.084760   23379 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:46:34.097556   23379 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:46:34.326607   23379 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:46:34.887901   23379 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:46:34.889036   23379 kubeadm.go:310] 
	I1202 11:46:34.889140   23379 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:46:34.889169   23379 kubeadm.go:310] 
	I1202 11:46:34.889273   23379 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:46:34.889281   23379 kubeadm.go:310] 
	I1202 11:46:34.889308   23379 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:46:34.889389   23379 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:46:34.889465   23379 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:46:34.889475   23379 kubeadm.go:310] 
	I1202 11:46:34.889554   23379 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:46:34.889564   23379 kubeadm.go:310] 
	I1202 11:46:34.889639   23379 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:46:34.889649   23379 kubeadm.go:310] 
	I1202 11:46:34.889720   23379 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:46:34.889845   23379 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:46:34.889909   23379 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:46:34.889916   23379 kubeadm.go:310] 
	I1202 11:46:34.889990   23379 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:46:34.890073   23379 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:46:34.890084   23379 kubeadm.go:310] 
	I1202 11:46:34.890170   23379 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890282   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 11:46:34.890321   23379 kubeadm.go:310] 	--control-plane 
	I1202 11:46:34.890328   23379 kubeadm.go:310] 
	I1202 11:46:34.890409   23379 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:46:34.890416   23379 kubeadm.go:310] 
	I1202 11:46:34.890483   23379 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890568   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 11:46:34.891577   23379 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
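The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm derives as the hex-encoded SHA-256 of the cluster CA certificate's Subject Public Key Info. A short sketch that reproduces such a value from a CA certificate; the path is the in-VM location the certificates were copied to earlier in this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}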
	I1202 11:46:34.891597   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:34.891603   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:34.892960   23379 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 11:46:34.893988   23379 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 11:46:34.899231   23379 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 11:46:34.899255   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 11:46:34.917969   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 11:46:35.272118   23379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:46:35.272198   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.272259   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935 minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=true
	I1202 11:46:35.310028   23379 ops.go:34] apiserver oom_adj: -16
	I1202 11:46:35.408095   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.908268   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.408944   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.909158   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.408454   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.909038   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.408700   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.908314   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:39.023834   23379 kubeadm.go:1113] duration metric: took 3.751689624s to wait for elevateKubeSystemPrivileges
	I1202 11:46:39.023871   23379 kubeadm.go:394] duration metric: took 15.234471878s to StartCluster
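The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to exist as part of elevateKubeSystemPrivileges. A rough sketch of that retry loop via os/exec; the kubectl path and kubeconfig are taken from the log, while the 2-minute budget and 500ms cadence are guesses for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the default ServiceAccount exists.
		if err := exec.Command(kubectl, args...).Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}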
	I1202 11:46:39.023890   23379 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.023968   23379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.024843   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.025096   23379 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.025129   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:46:39.025139   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:46:39.025146   23379 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 11:46:39.025247   23379 addons.go:69] Setting storage-provisioner=true in profile "ha-604935"
	I1202 11:46:39.025268   23379 addons.go:234] Setting addon storage-provisioner=true in "ha-604935"
	I1202 11:46:39.025297   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.025365   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.025267   23379 addons.go:69] Setting default-storageclass=true in profile "ha-604935"
	I1202 11:46:39.025420   23379 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-604935"
	I1202 11:46:39.025726   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025773   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.025867   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025904   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.040510   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I1202 11:46:39.040567   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1202 11:46:39.041007   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041111   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041642   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041669   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041855   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042005   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042156   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.042501   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.042547   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.044200   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.044508   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 11:46:39.044954   23379 cert_rotation.go:140] Starting client certificate rotation controller
	I1202 11:46:39.045176   23379 addons.go:234] Setting addon default-storageclass=true in "ha-604935"
	I1202 11:46:39.045212   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.045509   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.045548   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.056740   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I1202 11:46:39.057180   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.057736   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.057761   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.058043   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.058254   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.059103   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I1202 11:46:39.059506   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.059989   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.060003   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.060030   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.060305   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.060780   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.060821   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.061507   23379 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:46:39.062672   23379 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.062687   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:46:39.062700   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.065792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066230   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.066257   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066378   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.066549   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.066694   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.066850   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.076289   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I1202 11:46:39.076690   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.077099   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.077122   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.077418   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.077579   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.079081   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.079273   23379 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.079287   23379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:46:39.079300   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.082143   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082579   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.082597   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082752   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.082910   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.083074   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.083219   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.138927   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:46:39.202502   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.264780   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
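The two apply calls above are how the addons land in the cluster: the manifests were already scp'd into /etc/kubernetes/addons, and minikube now runs the bundled kubectl against the in-guest kubeconfig. A minimal sketch of the same invocation from inside the guest, folding both manifests into one command (paths and kubectl version taken from the log):

    # Apply the addon manifests with the cluster-local kubeconfig and bundled kubectl.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.2/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml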
	I1202 11:46:39.722155   23379 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
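The "host record injected" line corresponds to the pipeline run at 11:46:39.138927: the live CoreDNS Corefile is fetched, a hosts{} block mapping host.minikube.internal to the gateway IP is spliced in ahead of the forward directive, and the ConfigMap is replaced. A simplified sketch of the same edit, assuming an admin kubectl on the PATH and the stock CoreDNS ConfigMap layout (the original pipeline also inserts a `log` directive, omitted here):

    # Splice a hosts{} block for host.minikube.internal into the CoreDNS Corefile.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -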
	I1202 11:46:39.944980   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945000   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945116   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945141   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945269   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945284   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945292   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945298   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945459   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945489   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945500   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945513   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945457   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945578   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945581   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945620   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945796   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945844   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945933   23379 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 11:46:39.945977   23379 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 11:46:39.945813   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.946087   23379 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1202 11:46:39.946099   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.946109   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.946117   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.963939   23379 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1202 11:46:39.964651   23379 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1202 11:46:39.964667   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.964677   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.964684   23379 round_trippers.go:473]     Content-Type: application/json
	I1202 11:46:39.964689   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.968484   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:46:39.968627   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.968639   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.968886   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.968902   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.970238   23379 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1202 11:46:39.971383   23379 addons.go:510] duration metric: took 946.244666ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 11:46:39.971420   23379 start.go:246] waiting for cluster config update ...
	I1202 11:46:39.971435   23379 start.go:255] writing updated cluster config ...
	I1202 11:46:39.972900   23379 out.go:201] 
	I1202 11:46:39.974083   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.974147   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.975564   23379 out.go:177] * Starting "ha-604935-m02" control-plane node in "ha-604935" cluster
	I1202 11:46:39.976682   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:39.976701   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:46:39.976788   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:46:39.976800   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:46:39.976872   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.977100   23379 start.go:360] acquireMachinesLock for ha-604935-m02: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:46:39.977152   23379 start.go:364] duration metric: took 22.26µs to acquireMachinesLock for "ha-604935-m02"
	I1202 11:46:39.977175   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.977250   23379 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1202 11:46:39.978689   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:46:39.978765   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.978800   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.993356   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I1202 11:46:39.993775   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.994235   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.994266   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.994666   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.994881   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:46:39.995033   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:46:39.995225   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:46:39.995256   23379 client.go:168] LocalClient.Create starting
	I1202 11:46:39.995293   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:46:39.995339   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995364   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995433   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:46:39.995460   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995482   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995508   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:46:39.995520   23379 main.go:141] libmachine: (ha-604935-m02) Calling .PreCreateCheck
	I1202 11:46:39.995688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:46:39.996035   23379 main.go:141] libmachine: Creating machine...
	I1202 11:46:39.996049   23379 main.go:141] libmachine: (ha-604935-m02) Calling .Create
	I1202 11:46:39.996158   23379 main.go:141] libmachine: (ha-604935-m02) Creating KVM machine...
	I1202 11:46:39.997515   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing default KVM network
	I1202 11:46:39.997667   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing private KVM network mk-ha-604935
	I1202 11:46:39.997862   23379 main.go:141] libmachine: (ha-604935-m02) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:39.997894   23379 main.go:141] libmachine: (ha-604935-m02) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:46:39.997973   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:39.997863   23734 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:39.998066   23379 main.go:141] libmachine: (ha-604935-m02) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:46:40.246601   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.246459   23734 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa...
	I1202 11:46:40.345704   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345606   23734 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk...
	I1202 11:46:40.345732   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing magic tar header
	I1202 11:46:40.345746   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing SSH key tar header
	I1202 11:46:40.345760   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345732   23734 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:40.345873   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02
	I1202 11:46:40.345899   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:46:40.345912   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 (perms=drwx------)
	I1202 11:46:40.345936   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:40.345967   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:46:40.345981   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:46:40.345991   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:46:40.346001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:46:40.346014   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home
	I1202 11:46:40.346025   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Skipping /home - not owner
	I1202 11:46:40.346072   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:46:40.346108   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:46:40.346124   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:46:40.346137   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:46:40.346162   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:40.346895   23379 main.go:141] libmachine: (ha-604935-m02) define libvirt domain using xml: 
	I1202 11:46:40.346916   23379 main.go:141] libmachine: (ha-604935-m02) <domain type='kvm'>
	I1202 11:46:40.346942   23379 main.go:141] libmachine: (ha-604935-m02)   <name>ha-604935-m02</name>
	I1202 11:46:40.346957   23379 main.go:141] libmachine: (ha-604935-m02)   <memory unit='MiB'>2200</memory>
	I1202 11:46:40.346974   23379 main.go:141] libmachine: (ha-604935-m02)   <vcpu>2</vcpu>
	I1202 11:46:40.346979   23379 main.go:141] libmachine: (ha-604935-m02)   <features>
	I1202 11:46:40.346986   23379 main.go:141] libmachine: (ha-604935-m02)     <acpi/>
	I1202 11:46:40.346990   23379 main.go:141] libmachine: (ha-604935-m02)     <apic/>
	I1202 11:46:40.346995   23379 main.go:141] libmachine: (ha-604935-m02)     <pae/>
	I1202 11:46:40.347001   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347008   23379 main.go:141] libmachine: (ha-604935-m02)   </features>
	I1202 11:46:40.347027   23379 main.go:141] libmachine: (ha-604935-m02)   <cpu mode='host-passthrough'>
	I1202 11:46:40.347034   23379 main.go:141] libmachine: (ha-604935-m02)   
	I1202 11:46:40.347038   23379 main.go:141] libmachine: (ha-604935-m02)   </cpu>
	I1202 11:46:40.347043   23379 main.go:141] libmachine: (ha-604935-m02)   <os>
	I1202 11:46:40.347049   23379 main.go:141] libmachine: (ha-604935-m02)     <type>hvm</type>
	I1202 11:46:40.347054   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='cdrom'/>
	I1202 11:46:40.347060   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='hd'/>
	I1202 11:46:40.347066   23379 main.go:141] libmachine: (ha-604935-m02)     <bootmenu enable='no'/>
	I1202 11:46:40.347072   23379 main.go:141] libmachine: (ha-604935-m02)   </os>
	I1202 11:46:40.347077   23379 main.go:141] libmachine: (ha-604935-m02)   <devices>
	I1202 11:46:40.347082   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='cdrom'>
	I1202 11:46:40.347089   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/boot2docker.iso'/>
	I1202 11:46:40.347096   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hdc' bus='scsi'/>
	I1202 11:46:40.347101   23379 main.go:141] libmachine: (ha-604935-m02)       <readonly/>
	I1202 11:46:40.347105   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347111   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='disk'>
	I1202 11:46:40.347118   23379 main.go:141] libmachine: (ha-604935-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:46:40.347128   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk'/>
	I1202 11:46:40.347135   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hda' bus='virtio'/>
	I1202 11:46:40.347140   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347144   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347152   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='mk-ha-604935'/>
	I1202 11:46:40.347156   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347162   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347167   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347172   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='default'/>
	I1202 11:46:40.347178   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347183   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347187   23379 main.go:141] libmachine: (ha-604935-m02)     <serial type='pty'>
	I1202 11:46:40.347194   23379 main.go:141] libmachine: (ha-604935-m02)       <target port='0'/>
	I1202 11:46:40.347204   23379 main.go:141] libmachine: (ha-604935-m02)     </serial>
	I1202 11:46:40.347211   23379 main.go:141] libmachine: (ha-604935-m02)     <console type='pty'>
	I1202 11:46:40.347221   23379 main.go:141] libmachine: (ha-604935-m02)       <target type='serial' port='0'/>
	I1202 11:46:40.347236   23379 main.go:141] libmachine: (ha-604935-m02)     </console>
	I1202 11:46:40.347247   23379 main.go:141] libmachine: (ha-604935-m02)     <rng model='virtio'>
	I1202 11:46:40.347255   23379 main.go:141] libmachine: (ha-604935-m02)       <backend model='random'>/dev/random</backend>
	I1202 11:46:40.347264   23379 main.go:141] libmachine: (ha-604935-m02)     </rng>
	I1202 11:46:40.347271   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347282   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347295   23379 main.go:141] libmachine: (ha-604935-m02)   </devices>
	I1202 11:46:40.347306   23379 main.go:141] libmachine: (ha-604935-m02) </domain>
	I1202 11:46:40.347319   23379 main.go:141] libmachine: (ha-604935-m02) 
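The XML printed above is what the kvm2 driver hands to libvirt to define the m02 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM, the raw disk, and two virtio NICs (one on the private mk-ha-604935 network, one on default). A rough by-hand equivalent, assuming the same XML were saved to a hypothetical ha-604935-m02.xml and that virsh can reach the qemu:///system URI from the cluster config (the driver does this through the libvirt API, not virsh):

    # Define and start the domain from the generated XML, then inspect what libvirt stored.
    virsh --connect qemu:///system define ha-604935-m02.xml
    virsh --connect qemu:///system start ha-604935-m02
    virsh --connect qemu:///system dumpxml ha-604935-m02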
	I1202 11:46:40.353726   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:2b:bd:2e in network default
	I1202 11:46:40.354276   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring networks are active...
	I1202 11:46:40.354296   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:40.355011   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network default is active
	I1202 11:46:40.355333   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network mk-ha-604935 is active
	I1202 11:46:40.355771   23379 main.go:141] libmachine: (ha-604935-m02) Getting domain xml...
	I1202 11:46:40.356531   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:41.552192   23379 main.go:141] libmachine: (ha-604935-m02) Waiting to get IP...
	I1202 11:46:41.552923   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.553342   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.553365   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.553311   23734 retry.go:31] will retry after 250.26239ms: waiting for machine to come up
	I1202 11:46:41.804774   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.805224   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.805252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.805182   23734 retry.go:31] will retry after 337.906383ms: waiting for machine to come up
	I1202 11:46:42.144697   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.145141   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.145174   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.145097   23734 retry.go:31] will retry after 345.416251ms: waiting for machine to come up
	I1202 11:46:42.491650   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.492205   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.492269   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.492187   23734 retry.go:31] will retry after 576.231118ms: waiting for machine to come up
	I1202 11:46:43.069832   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.070232   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.070258   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.070185   23734 retry.go:31] will retry after 484.637024ms: waiting for machine to come up
	I1202 11:46:43.557338   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.557918   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.557945   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.557876   23734 retry.go:31] will retry after 878.448741ms: waiting for machine to come up
	I1202 11:46:44.437501   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:44.437938   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:44.437963   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:44.437910   23734 retry.go:31] will retry after 1.136235758s: waiting for machine to come up
	I1202 11:46:45.575985   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:45.576450   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:45.576493   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:45.576415   23734 retry.go:31] will retry after 1.136366132s: waiting for machine to come up
	I1202 11:46:46.714826   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:46.715252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:46.715280   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:46.715201   23734 retry.go:31] will retry after 1.737559308s: waiting for machine to come up
	I1202 11:46:48.455006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:48.455487   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:48.455517   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:48.455436   23734 retry.go:31] will retry after 1.586005802s: waiting for machine to come up
	I1202 11:46:50.042947   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:50.043522   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:50.043548   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:50.043471   23734 retry.go:31] will retry after 1.94342421s: waiting for machine to come up
	I1202 11:46:51.988099   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:51.988615   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:51.988639   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:51.988575   23734 retry.go:31] will retry after 3.527601684s: waiting for machine to come up
	I1202 11:46:55.517564   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:55.518092   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:55.518121   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:55.518041   23734 retry.go:31] will retry after 3.578241105s: waiting for machine to come up
	I1202 11:46:59.097310   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:59.097631   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:59.097651   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:59.097596   23734 retry.go:31] will retry after 5.085934719s: waiting for machine to come up
	I1202 11:47:04.187907   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.188401   23379 main.go:141] libmachine: (ha-604935-m02) Found IP for machine: 192.168.39.96
	I1202 11:47:04.188429   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has current primary IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
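The retries between 11:46:41 and 11:46:59 are the driver polling libvirt with a growing backoff until the guest obtains a DHCP lease on mk-ha-604935. The same check can be done by hand with virsh, assuming access to the same libvirt instance; the MAC address is the one reported in the log:

    # Poll the private network's DHCP leases until the new guest's MAC shows up.
    MAC=52:54:00:42:3a:28
    until virsh --connect qemu:///system net-dhcp-leases mk-ha-604935 | grep -q "$MAC"; do
      sleep 2
    done
    virsh --connect qemu:///system net-dhcp-leases mk-ha-604935 | grep "$MAC"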
	I1202 11:47:04.188437   23379 main.go:141] libmachine: (ha-604935-m02) Reserving static IP address...
	I1202 11:47:04.188743   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "ha-604935-m02", mac: "52:54:00:42:3a:28", ip: "192.168.39.96"} in network mk-ha-604935
	I1202 11:47:04.256531   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:04.256562   23379 main.go:141] libmachine: (ha-604935-m02) Reserved static IP address: 192.168.39.96
	I1202 11:47:04.256575   23379 main.go:141] libmachine: (ha-604935-m02) Waiting for SSH to be available...
	I1202 11:47:04.258823   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.259113   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935
	I1202 11:47:04.259157   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:42:3a:28
	I1202 11:47:04.259288   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:04.259308   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:04.259373   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:04.259397   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:04.259411   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:04.263986   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:47:04.264009   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:47:04.264016   23379 main.go:141] libmachine: (ha-604935-m02) DBG | command : exit 0
	I1202 11:47:04.264041   23379 main.go:141] libmachine: (ha-604935-m02) DBG | err     : exit status 255
	I1202 11:47:04.264051   23379 main.go:141] libmachine: (ha-604935-m02) DBG | output  : 
	I1202 11:47:07.264654   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:07.266849   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267221   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.267249   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267406   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:07.267434   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:07.267472   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:07.267495   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:07.267507   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:07.391931   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: <nil>: 
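WaitForSSH shells out to the system ssh with the options dumped at 11:47:04.259373 and 11:47:07.267472 and treats a clean `exit 0` as proof the guest is reachable; the first attempt fails with exit status 255 because the lease had not shown up yet, so the destination was an empty docker@. Reproducing the probe by hand with the key path, user, and options from the log:

    # Succeed only if an SSH session as the docker user can run `exit 0`.
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa \
        -p 22 docker@192.168.39.96 'exit 0' && echo "ssh is up"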
	I1202 11:47:07.392120   23379 main.go:141] libmachine: (ha-604935-m02) KVM machine creation complete!
	I1202 11:47:07.392498   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:07.393039   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393215   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393337   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:47:07.393354   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetState
	I1202 11:47:07.394565   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:47:07.394578   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:47:07.394584   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:47:07.394589   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.396709   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.397033   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397522   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.398890   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399081   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399216   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.399356   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.399544   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.399555   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:47:07.503380   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.503409   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:47:07.503420   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.506083   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506469   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.506502   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506641   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.506811   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.506958   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.507087   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.507236   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.507398   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.507407   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:47:07.612741   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:47:07.612843   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:47:07.612858   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:47:07.612872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613105   23379 buildroot.go:166] provisioning hostname "ha-604935-m02"
	I1202 11:47:07.613126   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613280   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.615682   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.616029   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616193   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.616355   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616496   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616615   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.616752   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.616925   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.616942   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m02 && echo "ha-604935-m02" | sudo tee /etc/hostname
	I1202 11:47:07.739596   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m02
	
	I1202 11:47:07.739622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.742125   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742500   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.742532   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742709   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.742872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743043   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743173   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.743334   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.743539   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.743561   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:47:07.857236   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
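Hostname provisioning is two SSH commands: set the transient and persistent hostname (11:47:07.616942), then patch /etc/hosts so the name resolves locally (11:47:07.743561). Collected into one script, with NODE as a stand-in for the node name and the grep/sed syntax kept as in the log (which assumes GNU-style \s support):

    # Set the hostname and make sure /etc/hosts has a 127.0.1.1 entry for it.
    NODE=ha-604935-m02
    sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
    if ! grep -xq ".*\s$NODE" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NODE/g" /etc/hosts
      else
        echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
      fi
    fi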
	I1202 11:47:07.857259   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:47:07.857284   23379 buildroot.go:174] setting up certificates
	I1202 11:47:07.857292   23379 provision.go:84] configureAuth start
	I1202 11:47:07.857300   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.857527   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:07.860095   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860513   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.860543   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860692   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.862585   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.862958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.862988   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.863114   23379 provision.go:143] copyHostCerts
	I1202 11:47:07.863150   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863186   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:47:07.863197   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863272   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:47:07.863374   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863401   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:47:07.863412   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863452   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:47:07.863528   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863553   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:47:07.863563   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863595   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:47:07.863674   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m02 san=[127.0.0.1 192.168.39.96 ha-604935-m02 localhost minikube]
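provision.go then issues a server certificate signed by the minikube CA, with the loopback address, the node IP, and the node names from the san=[...] list above, and the configured 26280h (1095-day) lifetime. A rough openssl approximation under those parameters, assuming the ca.pem/ca-key.pem pair named in the log and a bash shell for the process substitution (the real code does this in Go, not via openssl):

    # Issue a CA-signed server cert whose SANs match the log's san=[...] list.
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-604935-m02" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.96,DNS:ha-604935-m02,DNS:localhost,DNS:minikube")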
	I1202 11:47:08.103724   23379 provision.go:177] copyRemoteCerts
	I1202 11:47:08.103779   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:47:08.103802   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.106490   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.106829   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.106859   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.107025   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.107200   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.107328   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.107425   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.190303   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:47:08.190378   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:47:08.217749   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:47:08.217812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:47:08.240576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:47:08.240626   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:47:08.263351   23379 provision.go:87] duration metric: took 406.049409ms to configureAuth
	I1202 11:47:08.263374   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:47:08.263549   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:08.263627   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.266183   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266506   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.266542   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266657   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.266822   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.266953   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.267045   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.267212   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.267440   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.267458   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:47:08.480702   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
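The container-runtime step (command at 11:47:08.267458) drops a one-line sysconfig file that marks the service CIDR as an insecure registry range and restarts CRI-O; the echoed output above confirms what was written. Standalone, an equivalent up to the surrounding blank lines is:

    # Treat the cluster service CIDR as an insecure registry range for CRI-O.
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio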
	
	I1202 11:47:08.480726   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:47:08.480737   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetURL
	I1202 11:47:08.481946   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using libvirt version 6000000
	I1202 11:47:08.484074   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484465   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.484486   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484652   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:47:08.484665   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:47:08.484672   23379 client.go:171] duration metric: took 28.489409707s to LocalClient.Create
	I1202 11:47:08.484691   23379 start.go:167] duration metric: took 28.489467042s to libmachine.API.Create "ha-604935"
	I1202 11:47:08.484701   23379 start.go:293] postStartSetup for "ha-604935-m02" (driver="kvm2")
	I1202 11:47:08.484710   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:47:08.484726   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.484947   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:47:08.484979   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.487275   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487627   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.487652   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487763   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.487916   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.488023   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.488157   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.570418   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:47:08.574644   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:47:08.574668   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:47:08.574734   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:47:08.574834   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:47:08.574847   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:47:08.574955   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:47:08.584296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:08.607137   23379 start.go:296] duration metric: took 122.426316ms for postStartSetup
	I1202 11:47:08.607176   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:08.607688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.609787   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610122   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.610140   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610348   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:08.610507   23379 start.go:128] duration metric: took 28.633177558s to createHost
	I1202 11:47:08.610528   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.612576   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.612933   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.612958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.613094   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.613256   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613387   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613495   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.613675   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.613819   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.613829   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:47:08.721072   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140028.701362667
	
	I1202 11:47:08.721095   23379 fix.go:216] guest clock: 1733140028.701362667
	I1202 11:47:08.721104   23379 fix.go:229] Guest: 2024-12-02 11:47:08.701362667 +0000 UTC Remote: 2024-12-02 11:47:08.610518479 +0000 UTC m=+77.169276420 (delta=90.844188ms)
	I1202 11:47:08.721123   23379 fix.go:200] guest clock delta is within tolerance: 90.844188ms
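
The guest-clock check above runs `date +%s.%N` on the new VM and compares it against the host wall clock, proceeding only when the delta stays inside a tolerance. A minimal standalone sketch of that comparison is below; the epoch parsing and the one-second tolerance are assumptions for illustration, not minikube's fix.go code.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // parseEpoch converts "1733140028.701362667" (seconds.nanoseconds, as printed
    // by `date +%s.%N`) into a time.Time. Float parsing loses sub-microsecond
    // precision, which is fine for a tolerance check.
    func parseEpoch(s string) (time.Time, error) {
    	f, err := strconv.ParseFloat(s, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(f)
    	nsec := int64((f - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
    	const tolerance = time.Second // assumed tolerance, for illustration only

    	guest, err := parseEpoch("1733140028.701362667") // guest output from the log above
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now().UTC()

    	delta := guest.Sub(host)
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v; clock sync needed\n", delta, tolerance)
    	}
    }
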
	I1202 11:47:08.721129   23379 start.go:83] releasing machines lock for "ha-604935-m02", held for 28.743964366s
	I1202 11:47:08.721146   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.721362   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.723610   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.723892   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.723917   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.725920   23379 out.go:177] * Found network options:
	I1202 11:47:08.727151   23379 out.go:177]   - NO_PROXY=192.168.39.102
	W1202 11:47:08.728253   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.728295   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728718   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728888   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728964   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:47:08.729018   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	W1202 11:47:08.729077   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.729140   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:47:08.729159   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.731377   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731690   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731736   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.731757   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731905   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732089   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732138   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.732161   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.732263   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732335   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732412   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.732482   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732772   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.961089   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:47:08.967388   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:47:08.967456   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:47:08.983898   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:47:08.983919   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:47:08.983976   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:47:08.999755   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:47:09.012969   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:47:09.013013   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:47:09.025774   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:47:09.038595   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:47:09.155525   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:47:09.315590   23379 docker.go:233] disabling docker service ...
	I1202 11:47:09.315645   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:47:09.329428   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:47:09.341852   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:47:09.455987   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:47:09.568119   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:47:09.581349   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:47:09.599069   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:47:09.599131   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.609102   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:47:09.609172   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.619619   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.629809   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.640881   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:47:09.650894   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.660662   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.676866   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.687794   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:47:09.696987   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:47:09.697035   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:47:09.709512   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:47:09.718617   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:09.833443   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
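
The cri-o configuration step above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a daemon-reload and a crio restart. A hedged sketch of how such command strings could be assembled before being handed to an SSH runner; the helper name and return shape are invented for illustration and are not minikube's API.

    package main

    import "fmt"

    // crioSedCommands returns the shell commands that point cri-o at a pause
    // image and a cgroup manager, mirroring the sed edits visible in the log.
    func crioSedCommands(pauseImage, cgroupManager, confPath string) []string {
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, confPath),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, confPath),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, confPath),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, confPath),
    	}
    }

    func main() {
    	for _, cmd := range crioSedCommands("registry.k8s.io/pause:3.10", "cgroupfs", "/etc/crio/crio.conf.d/02-crio.conf") {
    		fmt.Println(cmd)
    	}
    }
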
	I1202 11:47:09.924039   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:47:09.924108   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:47:09.929102   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:47:09.929151   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:47:09.932909   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:47:09.970799   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:47:09.970857   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:09.997925   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:10.026009   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:47:10.027185   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:47:10.028209   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:10.030558   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.030843   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:10.030865   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.031081   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:47:10.034913   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
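
The /etc/hosts update above first greps for an existing host.minikube.internal entry and, when it is missing, rewrites the file with any stale mapping filtered out and the new one appended. A small in-process sketch of the same filter-and-append pattern; the file path is a parameter and this is not minikube's ssh_runner-based implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostEntry rewrites a hosts-style file so that exactly one line maps
    // hostname to ip, mirroring the grep -v / echo / cp pipeline in the log.
    func ensureHostEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		fields := strings.Fields(line)
    		// Drop any previous mapping for this hostname.
    		if len(fields) >= 2 && fields[len(fields)-1] == hostname {
    			continue
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
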
	I1202 11:47:10.046993   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:47:10.047168   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:10.047464   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.047509   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.061535   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I1202 11:47:10.061962   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.062500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.062519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.062832   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.062993   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:47:10.064396   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:10.064646   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.064674   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.078237   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1202 11:47:10.078536   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.078918   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.078933   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.079205   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.079368   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:10.079517   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.96
	I1202 11:47:10.079528   23379 certs.go:194] generating shared ca certs ...
	I1202 11:47:10.079548   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.079686   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:47:10.079733   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:47:10.079746   23379 certs.go:256] generating profile certs ...
	I1202 11:47:10.079838   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:47:10.079869   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3
	I1202 11:47:10.079889   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.254]
	I1202 11:47:10.265166   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 ...
	I1202 11:47:10.265189   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3: {Name:mkdd0b8b1421fc39bdc7a4c81c195bce0584f3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265365   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 ...
	I1202 11:47:10.265383   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3: {Name:mk317f3cb02e9fefc92b2802c6865b7da9a08a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265473   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:47:10.265636   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
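
The apiserver profile certificate above is generated with an IP SAN list covering the service VIP, localhost, and both control-plane addresses (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.102, 192.168.39.96, 192.168.39.254). A self-contained crypto/x509 sketch that produces a certificate with such IP SANs is below; it self-signs for brevity, whereas minikube signs against its minikubeCA, so treat it only as an illustration of where the SAN list goes.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// IP SANs taken from the log line above.
    	sans := []net.IP{
    		net.ParseIP("10.96.0.1"),
    		net.ParseIP("127.0.0.1"),
    		net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.102"),
    		net.ParseIP("192.168.39.96"),
    		net.ParseIP("192.168.39.254"),
    	}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube-apiserver"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  sans, // the SAN list that makes the VIP and node IPs valid
    	}

    	// Self-signed for brevity; the real cert is signed by the cluster CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
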
	I1202 11:47:10.265813   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:47:10.265832   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:47:10.265850   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:47:10.265871   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:47:10.265888   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:47:10.265904   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:47:10.265920   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:47:10.265936   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:47:10.265955   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:47:10.266021   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:47:10.266059   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:47:10.266073   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:47:10.266106   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:47:10.266137   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:47:10.266166   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:47:10.266222   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:10.266260   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.266282   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.266301   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.266341   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:10.268885   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269241   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:10.269271   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269395   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:10.269566   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:10.269669   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:10.269777   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:10.344538   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:47:10.349538   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:47:10.360402   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:47:10.364479   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:47:10.374445   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:47:10.378811   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:47:10.389170   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:47:10.392986   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:47:10.403485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:47:10.408617   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:47:10.418394   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:47:10.422245   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:47:10.432316   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:47:10.458960   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:47:10.483156   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:47:10.505724   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:47:10.527955   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 11:47:10.550812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:47:10.573508   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:47:10.595760   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:47:10.618337   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:47:10.641184   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:47:10.663681   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:47:10.687678   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:47:10.703651   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:47:10.719297   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:47:10.734755   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:47:10.751060   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:47:10.767295   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:47:10.783201   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:47:10.798776   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:47:10.804781   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:47:10.814853   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819107   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819150   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.824680   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:47:10.834444   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:47:10.847333   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852096   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852141   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.857456   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:47:10.867671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:47:10.878797   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883014   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883050   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.888463   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:47:10.900014   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:47:10.903987   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:47:10.904033   23379 kubeadm.go:934] updating node {m02 192.168.39.96 8443 v1.31.2 crio true true} ...
	I1202 11:47:10.904108   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:47:10.904143   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:47:10.904172   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:47:10.920663   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:47:10.920727   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
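
The kube-vip static-pod manifest above is rendered from a handful of parameters: the VIP (192.168.39.254), the interface (eth0), the API port (8443), the image tag, and whether control-plane load balancing is enabled. A trimmed text/template sketch of that kind of rendering follows; the template keeps only a few of the env entries and is not minikube's kube-vip.go template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // vipParams are the values the log shows being substituted into the manifest.
    type vipParams struct {
    	Address   string
    	Interface string
    	Port      string
    	LBEnable  bool
    	Image     string
    }

    // A heavily trimmed manifest template, for illustration only.
    const vipTemplate = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: address
          value: {{ .Address }}
        - name: vip_interface
          value: {{ .Interface }}
        - name: port
          value: "{{ .Port }}"
        - name: lb_enable
          value: "{{ .LBEnable }}"
      hostNetwork: true
    `

    func main() {
    	tmpl := template.Must(template.New("kube-vip").Parse(vipTemplate))
    	p := vipParams{
    		Address:   "192.168.39.254",
    		Interface: "eth0",
    		Port:      "8443",
    		LBEnable:  true,
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.6",
    	}
    	if err := tmpl.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
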
	I1202 11:47:10.920782   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.929813   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:47:10.929869   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.938939   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:47:10.938963   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939004   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1202 11:47:10.939023   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939098   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1202 11:47:10.943516   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:47:10.943543   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:47:11.580278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.580378   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.585380   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:47:11.585410   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:47:11.699996   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:11.746001   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.746098   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.755160   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:47:11.755193   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
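
The kubectl/kubeadm/kubelet transfers above are driven by download URLs carrying a checksum=file: companion that points at a .sha256 file, and each binary is only copied when the remote stat check fails. A generic sketch of a download verified against a published SHA-256; the URLs below follow the dl.k8s.io shape from the log, but the helper itself is an illustration, not minikube's download package.

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // downloadVerified fetches url into dest and compares the SHA-256 of the body
    // against the first token of the file served at sumURL.
    func downloadVerified(url, sumURL, dest string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}

    	sumResp, err := http.Get(sumURL)
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	raw, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	fields := strings.Fields(string(raw))
    	if len(fields) == 0 {
    		return fmt.Errorf("empty checksum file at %s", sumURL)
    	}
    	got := hex.EncodeToString(h.Sum(nil))
    	if got != fields[0] {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
    	}
    	return nil
    }

    func main() {
    	err := downloadVerified(
    		"https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl",
    		"https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256",
    		"kubectl")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
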
	I1202 11:47:12.167193   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:47:12.177362   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 11:47:12.193477   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:47:12.209277   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:47:12.225224   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:47:12.229096   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:12.241465   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:12.355965   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:12.372721   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:12.373199   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:12.373246   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:12.387521   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1202 11:47:12.387950   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:12.388471   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:12.388495   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:12.388817   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:12.389008   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:12.389136   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:47:12.389250   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:47:12.389272   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:12.391559   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.391918   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:12.391947   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.392078   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:12.392244   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:12.392404   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:12.392523   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:12.542455   23379 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:12.542510   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443"
	I1202 11:47:33.298276   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443": (20.75572497s)
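
The join of m02 above is a single long kubeadm join invocation assembled from the token printed by `kubeadm token create --print-join-command`, the discovery CA-cert hash, the CRI socket, the node name, and the advertise address. A small sketch of composing that command line; the field values are copied from the log, while the builder itself is illustrative rather than minikube's code.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // joinCommand assembles a control-plane `kubeadm join` invocation from its
    // parts, in the same shape as the command shown in the log.
    func joinCommand(endpoint, token, caHash, criSocket, nodeName, advertiseIP string, port int) string {
    	parts := []string{
    		"kubeadm join", endpoint,
    		"--token", token,
    		"--discovery-token-ca-cert-hash", caHash,
    		"--ignore-preflight-errors=all",
    		"--cri-socket", criSocket,
    		"--node-name=" + nodeName,
    		"--control-plane",
    		"--apiserver-advertise-address=" + advertiseIP,
    		fmt.Sprintf("--apiserver-bind-port=%d", port),
    	}
    	return strings.Join(parts, " ")
    }

    func main() {
    	fmt.Println(joinCommand(
    		"control-plane.minikube.internal:8443",
    		"781q3h.dri7zuf7dlr9vool",
    		"sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb",
    		"unix:///var/run/crio/crio.sock",
    		"ha-604935-m02",
    		"192.168.39.96",
    		8443,
    	))
    }
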
	I1202 11:47:33.298324   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:47:33.868140   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m02 minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:47:34.014505   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:47:34.151913   23379 start.go:319] duration metric: took 21.762775302s to joinCluster
	I1202 11:47:34.151988   23379 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:34.152289   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:34.153405   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:47:34.154583   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:34.458218   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:34.537753   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:47:34.537985   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:47:34.538049   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:47:34.538237   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m02" to be "Ready" ...
	I1202 11:47:34.538328   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:34.538338   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:34.538353   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:34.538361   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:34.553164   23379 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
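
The polling that follows is minikube repeatedly GETting /api/v1/nodes/ha-604935-m02 until the node reports a Ready condition, with the "waiting up to 6m0s" budget noted above. A client-go sketch of the same wait, assuming the kubeconfig path shown in the log and the standard wait.PollUntilContextTimeout helper; the interval and error handling are illustrative and this is not minikube's node_ready.go.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 500ms, up to 6 minutes, matching the wait budget in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-604935-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API errors as transient and keep polling
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "ha-604935-m02" is Ready`)
    }
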
	I1202 11:47:35.038636   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.038655   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.038663   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.038667   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.043410   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:35.539240   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.539268   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.539288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.539295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.543768   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:36.038477   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.038500   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.038510   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.038514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.044852   23379 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1202 11:47:36.539264   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.539282   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.539291   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.539294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.541884   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:36.542608   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:37.039323   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.039344   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.039355   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.039363   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.042762   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:37.539267   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.539288   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.539298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.539302   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.542085   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:38.039187   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.039205   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.039213   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.039217   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.042510   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:38.538564   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.538590   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.538602   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.538607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.543229   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:38.543842   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:39.039431   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.039454   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.039465   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.039470   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.043101   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:39.538521   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.538565   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.544151   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:47:40.039125   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.039142   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.039150   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.039155   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.041928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:40.539447   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.539466   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.539477   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.539482   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.542088   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.039165   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.039194   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.039206   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.039214   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.042019   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.042646   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:41.538430   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.538449   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.538456   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.538460   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.541300   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:42.038543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.038564   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.038574   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.038579   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.042807   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:42.539123   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.539144   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.539155   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.539168   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.615775   23379 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1202 11:47:43.038628   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.038651   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.038660   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.038670   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.041582   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:43.538519   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.538566   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.542876   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:43.543448   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:44.038473   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.038493   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.038501   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.038506   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.041916   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:44.538909   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.538934   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.538946   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.538954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.542475   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.039019   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.039039   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.039046   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.039050   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.042662   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.539381   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.539404   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.539414   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.539419   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.543229   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.544177   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:46.038600   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.038622   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.038630   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.038635   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.041460   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:46.538597   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.538618   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.538628   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.538632   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.541444   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:47.038797   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.038817   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.038825   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.038828   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.041962   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:47.539440   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.539463   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.539470   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.539474   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.543115   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.039282   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.039306   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.039316   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.039320   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.042491   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.043162   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:48.539348   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.539372   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.539382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.539387   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.542583   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.038466   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.038485   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.038493   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.038498   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.041480   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.539130   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.539151   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.539162   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.539166   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.542870   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.543570   23379 node_ready.go:49] node "ha-604935-m02" has status "Ready":"True"
	I1202 11:47:49.543589   23379 node_ready.go:38] duration metric: took 15.005336835s for node "ha-604935-m02" to be "Ready" ...
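The block above is minikube's node_ready helper polling GET /api/v1/nodes/ha-604935-m02 roughly every 500ms until the node's Ready condition turns True (about 15s here). A minimal client-go sketch of that polling pattern, for illustration only (not minikube's actual implementation; the kubeconfig path and node name are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path and node name, purely for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-604935-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		if ctx.Err() != nil {
			panic("timed out waiting for node to become Ready")
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
}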
	I1202 11:47:49.543598   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:47:49.543686   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:49.543695   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.543702   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.543707   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.548022   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.557050   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.557145   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:47:49.557159   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.557169   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.557181   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.561541   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.562194   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.562212   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.562222   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.562229   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.564378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.564821   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.564836   23379 pod_ready.go:82] duration metric: took 7.7579ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564845   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:47:49.564905   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.564912   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.564919   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.566980   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.567489   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.567501   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.567509   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.567514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.569545   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.570321   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.570337   23379 pod_ready.go:82] duration metric: took 5.482367ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570346   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570395   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:47:49.570402   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.570408   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.570416   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.572224   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.572830   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.572845   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.572852   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.572856   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.574847   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.575387   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.575407   23379 pod_ready.go:82] duration metric: took 5.05521ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575417   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575471   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:49.575482   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.575492   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.575497   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.577559   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.578025   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.578036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.578042   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.578046   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.580244   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.075930   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.075955   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.075967   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.075972   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.078932   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.079644   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.079660   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.079671   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.079679   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.083049   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.576373   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.576396   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.576404   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.576408   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.579581   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.580413   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.580428   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.580435   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.580439   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.582674   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.075671   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:51.075692   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.075700   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.075705   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.080547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:51.081109   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.081140   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.081151   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.081159   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.083775   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.084570   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.084587   23379 pod_ready.go:82] duration metric: took 1.509162413s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084605   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084654   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:47:51.084661   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.084668   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.084676   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.086997   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.139895   23379 request.go:632] Waited for 52.198749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139936   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139941   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.139948   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.139954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.142459   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.143143   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.143164   23379 pod_ready.go:82] duration metric: took 58.549955ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.143176   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.339592   23379 request.go:632] Waited for 196.342057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339648   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.339657   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.339665   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.342939   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.539862   23379 request.go:632] Waited for 196.164588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539935   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.539943   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.539950   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.543209   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.543865   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.543882   23379 pod_ready.go:82] duration metric: took 400.698772ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.543892   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.739144   23379 request.go:632] Waited for 195.19473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739219   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739235   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.739245   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.739249   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.741900   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.940184   23379 request.go:632] Waited for 197.361013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940269   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940278   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.940285   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.940289   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.943128   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.943706   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.943727   23379 pod_ready.go:82] duration metric: took 399.828238ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.943741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.139832   23379 request.go:632] Waited for 196.024828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139908   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.139915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.139922   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.143273   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.339296   23379 request.go:632] Waited for 195.254025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339382   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.339392   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.339396   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.343086   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.343632   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.343651   23379 pod_ready.go:82] duration metric: took 399.901549ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.343664   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.540119   23379 request.go:632] Waited for 196.382954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540223   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.540246   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.540254   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.544789   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.739964   23379 request.go:632] Waited for 194.383281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740029   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.740047   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.740056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.744675   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.745274   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.745291   23379 pod_ready.go:82] duration metric: took 401.620034ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.745302   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.939398   23379 request.go:632] Waited for 194.014981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939448   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939453   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.939460   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.939466   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.942473   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:53.139562   23379 request.go:632] Waited for 196.368019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139626   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139631   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.139639   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.139642   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.142786   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.143361   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.143382   23379 pod_ready.go:82] duration metric: took 398.068666ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.143391   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.339501   23379 request.go:632] Waited for 196.04496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339586   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339596   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.339607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.339618   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.343080   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.540159   23379 request.go:632] Waited for 196.184742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540226   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540246   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.540255   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.540261   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.543534   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.544454   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.544479   23379 pod_ready.go:82] duration metric: took 401.077052ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.544494   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.739453   23379 request.go:632] Waited for 194.878612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739540   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739557   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.739572   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.739583   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.743318   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.939180   23379 request.go:632] Waited for 195.280753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939245   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939250   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.939258   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.939265   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.943381   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:53.944067   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.944085   23379 pod_ready.go:82] duration metric: took 399.577551ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.944099   23379 pod_ready.go:39] duration metric: took 4.40047197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
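The recurring request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" messages in the pod_ready block above come from client-go's client-side rate limiter (QPS 5, burst 10 by default), not from server-side API Priority and Fairness. A sketch of building a clientset with higher limits, shown only as an illustration of where those knobs live (the kubeconfig path is an assumption):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side using these two fields;
	// left at zero, the defaults (QPS 5, Burst 10) apply, which is what
	// produces the ~200ms waits logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = client // use the clientset as usual; requests are throttled far less often
}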
	I1202 11:47:53.944119   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:47:53.944173   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:47:53.960762   23379 api_server.go:72] duration metric: took 19.808744771s to wait for apiserver process to appear ...
	I1202 11:47:53.960781   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:47:53.960802   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:47:53.965634   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:47:53.965695   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:47:53.965706   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.965717   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.965727   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.966539   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:47:53.966644   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:47:53.966664   23379 api_server.go:131] duration metric: took 5.87665ms to wait for apiserver health ...
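At this point the log switches from polling pods to probing the API server directly: a GET to /healthz that must return 200 with body "ok", followed by GET /version to read the control-plane version (v1.31.2 here). A minimal sketch of both probes through client-go, illustrative only (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz through the authenticated REST client; "ok" means healthy.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// GET /version is already wrapped by the discovery client.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}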
	I1202 11:47:53.966674   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:47:54.140116   23379 request.go:632] Waited for 173.370822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140184   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140192   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.140203   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.140213   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.144688   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.150151   23379 system_pods.go:59] 17 kube-system pods found
	I1202 11:47:54.150175   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.150180   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.150184   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.150187   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.150190   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.150193   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.150196   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.150200   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.150204   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.150208   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.150213   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.150216   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.150222   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.150225   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.150228   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.150230   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.150234   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.150239   23379 system_pods.go:74] duration metric: took 183.556674ms to wait for pod list to return data ...
	I1202 11:47:54.150248   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:47:54.339686   23379 request.go:632] Waited for 189.36849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339740   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339744   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.339751   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.339755   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.343135   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.343361   23379 default_sa.go:45] found service account: "default"
	I1202 11:47:54.343386   23379 default_sa.go:55] duration metric: took 193.131705ms for default service account to be created ...
	I1202 11:47:54.343397   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:47:54.539835   23379 request.go:632] Waited for 196.371965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539943   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.539954   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.539964   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.544943   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.550739   23379 system_pods.go:86] 17 kube-system pods found
	I1202 11:47:54.550763   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.550769   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.550775   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.550778   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.550809   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.550819   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.550824   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.550829   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.550833   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.550837   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.550841   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.550848   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.550852   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.550857   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.550862   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.550867   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.550870   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.550878   23379 system_pods.go:126] duration metric: took 207.476252ms to wait for k8s-apps to be running ...
	I1202 11:47:54.550887   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:47:54.550927   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:54.567143   23379 system_svc.go:56] duration metric: took 16.250371ms WaitForService to wait for kubelet
	I1202 11:47:54.567163   23379 kubeadm.go:582] duration metric: took 20.415147049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:47:54.567180   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:47:54.739589   23379 request.go:632] Waited for 172.338353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739668   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739675   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.739683   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.739688   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.743346   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.744125   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744152   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744165   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744170   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744177   23379 node_conditions.go:105] duration metric: took 176.990456ms to run NodePressure ...
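The NodePressure step above lists both nodes and reads their capacity (17734596Ki of ephemeral storage and 2 CPUs each). Reading the same fields with client-go could look like this sketch (illustrative; kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}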
	I1202 11:47:54.744190   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:47:54.744223   23379 start.go:255] writing updated cluster config ...
	I1202 11:47:54.746253   23379 out.go:201] 
	I1202 11:47:54.747593   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:54.747718   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.749358   23379 out.go:177] * Starting "ha-604935-m03" control-plane node in "ha-604935" cluster
	I1202 11:47:54.750410   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:47:54.750433   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:47:54.750533   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:47:54.750548   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:47:54.750643   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.750878   23379 start.go:360] acquireMachinesLock for ha-604935-m03: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:47:54.750923   23379 start.go:364] duration metric: took 26.206µs to acquireMachinesLock for "ha-604935-m03"
	I1202 11:47:54.750944   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:54.751067   23379 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1202 11:47:54.752864   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:47:54.752946   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:54.752986   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:54.767584   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1202 11:47:54.767916   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:54.768481   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:54.768505   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:54.768819   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:54.768991   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:47:54.769125   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:47:54.769335   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:47:54.769376   23379 client.go:168] LocalClient.Create starting
	I1202 11:47:54.769409   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:47:54.769445   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769469   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769535   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:47:54.769563   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769581   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769610   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:47:54.769622   23379 main.go:141] libmachine: (ha-604935-m03) Calling .PreCreateCheck
	I1202 11:47:54.769820   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:47:54.770184   23379 main.go:141] libmachine: Creating machine...
	I1202 11:47:54.770198   23379 main.go:141] libmachine: (ha-604935-m03) Calling .Create
	I1202 11:47:54.770317   23379 main.go:141] libmachine: (ha-604935-m03) Creating KVM machine...
	I1202 11:47:54.771476   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing default KVM network
	I1202 11:47:54.771588   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing private KVM network mk-ha-604935
	I1202 11:47:54.771715   23379 main.go:141] libmachine: (ha-604935-m03) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:54.771731   23379 main.go:141] libmachine: (ha-604935-m03) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:47:54.771824   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:54.771717   24139 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:54.771925   23379 main.go:141] libmachine: (ha-604935-m03) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:47:55.025734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.025618   24139 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa...
	I1202 11:47:55.125359   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125265   24139 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk...
	I1202 11:47:55.125386   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing magic tar header
	I1202 11:47:55.125397   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing SSH key tar header
	I1202 11:47:55.125407   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125384   24139 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:55.125541   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03
	I1202 11:47:55.125572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:47:55.125586   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 (perms=drwx------)
	I1202 11:47:55.125605   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:47:55.125622   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:47:55.125634   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:47:55.125649   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:47:55.125663   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:55.125683   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:47:55.125697   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:47:55.125710   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:47:55.125719   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home
	I1202 11:47:55.125733   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:47:55.125745   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Skipping /home - not owner
	I1202 11:47:55.125754   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:55.126629   23379 main.go:141] libmachine: (ha-604935-m03) define libvirt domain using xml: 
	I1202 11:47:55.126649   23379 main.go:141] libmachine: (ha-604935-m03) <domain type='kvm'>
	I1202 11:47:55.126659   23379 main.go:141] libmachine: (ha-604935-m03)   <name>ha-604935-m03</name>
	I1202 11:47:55.126667   23379 main.go:141] libmachine: (ha-604935-m03)   <memory unit='MiB'>2200</memory>
	I1202 11:47:55.126675   23379 main.go:141] libmachine: (ha-604935-m03)   <vcpu>2</vcpu>
	I1202 11:47:55.126685   23379 main.go:141] libmachine: (ha-604935-m03)   <features>
	I1202 11:47:55.126693   23379 main.go:141] libmachine: (ha-604935-m03)     <acpi/>
	I1202 11:47:55.126701   23379 main.go:141] libmachine: (ha-604935-m03)     <apic/>
	I1202 11:47:55.126706   23379 main.go:141] libmachine: (ha-604935-m03)     <pae/>
	I1202 11:47:55.126709   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.126714   23379 main.go:141] libmachine: (ha-604935-m03)   </features>
	I1202 11:47:55.126721   23379 main.go:141] libmachine: (ha-604935-m03)   <cpu mode='host-passthrough'>
	I1202 11:47:55.126745   23379 main.go:141] libmachine: (ha-604935-m03)   
	I1202 11:47:55.126763   23379 main.go:141] libmachine: (ha-604935-m03)   </cpu>
	I1202 11:47:55.126773   23379 main.go:141] libmachine: (ha-604935-m03)   <os>
	I1202 11:47:55.126780   23379 main.go:141] libmachine: (ha-604935-m03)     <type>hvm</type>
	I1202 11:47:55.126791   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='cdrom'/>
	I1202 11:47:55.126796   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='hd'/>
	I1202 11:47:55.126808   23379 main.go:141] libmachine: (ha-604935-m03)     <bootmenu enable='no'/>
	I1202 11:47:55.126817   23379 main.go:141] libmachine: (ha-604935-m03)   </os>
	I1202 11:47:55.126827   23379 main.go:141] libmachine: (ha-604935-m03)   <devices>
	I1202 11:47:55.126837   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='cdrom'>
	I1202 11:47:55.126849   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/boot2docker.iso'/>
	I1202 11:47:55.126860   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hdc' bus='scsi'/>
	I1202 11:47:55.126869   23379 main.go:141] libmachine: (ha-604935-m03)       <readonly/>
	I1202 11:47:55.126878   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126888   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='disk'>
	I1202 11:47:55.126904   23379 main.go:141] libmachine: (ha-604935-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:47:55.126929   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk'/>
	I1202 11:47:55.126949   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hda' bus='virtio'/>
	I1202 11:47:55.126958   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126972   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.126984   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='mk-ha-604935'/>
	I1202 11:47:55.126990   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127001   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127011   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.127022   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='default'/>
	I1202 11:47:55.127039   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127046   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127054   23379 main.go:141] libmachine: (ha-604935-m03)     <serial type='pty'>
	I1202 11:47:55.127059   23379 main.go:141] libmachine: (ha-604935-m03)       <target port='0'/>
	I1202 11:47:55.127065   23379 main.go:141] libmachine: (ha-604935-m03)     </serial>
	I1202 11:47:55.127070   23379 main.go:141] libmachine: (ha-604935-m03)     <console type='pty'>
	I1202 11:47:55.127080   23379 main.go:141] libmachine: (ha-604935-m03)       <target type='serial' port='0'/>
	I1202 11:47:55.127089   23379 main.go:141] libmachine: (ha-604935-m03)     </console>
	I1202 11:47:55.127100   23379 main.go:141] libmachine: (ha-604935-m03)     <rng model='virtio'>
	I1202 11:47:55.127112   23379 main.go:141] libmachine: (ha-604935-m03)       <backend model='random'>/dev/random</backend>
	I1202 11:47:55.127125   23379 main.go:141] libmachine: (ha-604935-m03)     </rng>
	I1202 11:47:55.127130   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127136   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127141   23379 main.go:141] libmachine: (ha-604935-m03)   </devices>
	I1202 11:47:55.127147   23379 main.go:141] libmachine: (ha-604935-m03) </domain>
	I1202 11:47:55.127154   23379 main.go:141] libmachine: (ha-604935-m03) 
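The XML dumped above is the full libvirt domain definition the kvm2 driver generates for the new node before the "Creating domain..." step that follows. For reference, the define-then-start sequence can be reproduced by hand against the same libvirt instance; a minimal sketch, assuming the XML has been saved to ha-604935-m03.xml and that the virsh CLI is available (the driver itself talks to libvirt through its Go bindings, not virsh):

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart hands a previously generated domain XML to libvirt and boots
// the resulting domain, mirroring the "Creating domain..." step in the log,
// but shelling out to virsh instead of using the Go bindings.
func defineAndStart(xmlPath, domainName string) error {
	// "virsh define" registers the persistent domain from the XML file.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// "virsh start" boots it; afterwards the driver polls DHCP for an IP.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("ha-604935-m03.xml", "ha-604935-m03"); err != nil {
		fmt.Println(err)
	}
}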
	I1202 11:47:55.134362   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:04:31:c3 in network default
	I1202 11:47:55.134940   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring networks are active...
	I1202 11:47:55.134970   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:55.135700   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network default is active
	I1202 11:47:55.135994   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network mk-ha-604935 is active
	I1202 11:47:55.136395   23379 main.go:141] libmachine: (ha-604935-m03) Getting domain xml...
	I1202 11:47:55.137154   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:56.327343   23379 main.go:141] libmachine: (ha-604935-m03) Waiting to get IP...
	I1202 11:47:56.328051   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.328532   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.328560   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.328490   24139 retry.go:31] will retry after 245.534512ms: waiting for machine to come up
	I1202 11:47:56.575853   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.576344   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.576361   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.576322   24139 retry.go:31] will retry after 318.961959ms: waiting for machine to come up
	I1202 11:47:56.897058   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.897590   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.897617   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.897539   24139 retry.go:31] will retry after 408.54179ms: waiting for machine to come up
	I1202 11:47:57.308040   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.308434   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.308462   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.308386   24139 retry.go:31] will retry after 402.803745ms: waiting for machine to come up
	I1202 11:47:57.713046   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.713543   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.713570   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.713486   24139 retry.go:31] will retry after 579.226055ms: waiting for machine to come up
	I1202 11:47:58.294078   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:58.294470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:58.294499   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:58.294431   24139 retry.go:31] will retry after 896.930274ms: waiting for machine to come up
	I1202 11:47:59.192283   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:59.192647   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:59.192676   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:59.192594   24139 retry.go:31] will retry after 885.008169ms: waiting for machine to come up
	I1202 11:48:00.078944   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:00.079402   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:00.079429   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:00.079369   24139 retry.go:31] will retry after 1.252859053s: waiting for machine to come up
	I1202 11:48:01.333237   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:01.333651   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:01.333686   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:01.333595   24139 retry.go:31] will retry after 1.614324315s: waiting for machine to come up
	I1202 11:48:02.949128   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:02.949536   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:02.949565   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:02.949508   24139 retry.go:31] will retry after 1.812710836s: waiting for machine to come up
	I1202 11:48:04.763946   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:04.764375   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:04.764406   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:04.764323   24139 retry.go:31] will retry after 2.067204627s: waiting for machine to come up
	I1202 11:48:06.833288   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:06.833665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:06.833688   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:06.833637   24139 retry.go:31] will retry after 2.307525128s: waiting for machine to come up
	I1202 11:48:09.144169   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:09.144572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:09.144593   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:09.144528   24139 retry.go:31] will retry after 3.498536479s: waiting for machine to come up
	I1202 11:48:12.646257   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:12.646634   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:12.646662   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:12.646585   24139 retry.go:31] will retry after 4.180840958s: waiting for machine to come up
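The retry.go lines above show the driver polling the mk-ha-604935 DHCP leases with a growing, jittered delay until the new MAC address picks up an IP. A minimal sketch of that wait-with-backoff pattern, assuming a caller-supplied lookup function (hypothetical) that returns the address once a lease appears:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt, much like
// the "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Add up to 50% jitter and grow the base delay, capped at a few seconds.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		if backoff < 4*time.Second {
			backoff = backoff * 3 / 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitForIP(func() (string, error) {
		return "", errors.New("no lease yet") // stand-in for a DHCP lease lookup
	}, 2*time.Second)
	fmt.Println(ip, err)
}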
	I1202 11:48:16.830266   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830768   23379 main.go:141] libmachine: (ha-604935-m03) Found IP for machine: 192.168.39.211
	I1202 11:48:16.830807   23379 main.go:141] libmachine: (ha-604935-m03) Reserving static IP address...
	I1202 11:48:16.831141   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find host DHCP lease matching {name: "ha-604935-m03", mac: "52:54:00:56:c4:59", ip: "192.168.39.211"} in network mk-ha-604935
	I1202 11:48:16.902131   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Getting to WaitForSSH function...
	I1202 11:48:16.902164   23379 main.go:141] libmachine: (ha-604935-m03) Reserved static IP address: 192.168.39.211
	I1202 11:48:16.902173   23379 main.go:141] libmachine: (ha-604935-m03) Waiting for SSH to be available...
	I1202 11:48:16.905075   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905526   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:16.905551   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH client type: external
	I1202 11:48:16.905772   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa (-rw-------)
	I1202 11:48:16.905800   23379 main.go:141] libmachine: (ha-604935-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:48:16.905820   23379 main.go:141] libmachine: (ha-604935-m03) DBG | About to run SSH command:
	I1202 11:48:16.905851   23379 main.go:141] libmachine: (ha-604935-m03) DBG | exit 0
	I1202 11:48:17.032533   23379 main.go:141] libmachine: (ha-604935-m03) DBG | SSH cmd err, output: <nil>: 
	I1202 11:48:17.032776   23379 main.go:141] libmachine: (ha-604935-m03) KVM machine creation complete!
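"KVM machine creation complete" is declared only once a throwaway "exit 0" succeeds over SSH, using the external ssh client and the options logged a few lines earlier. A small sketch of that readiness probe, with the address and key path as placeholders for the machine's DHCP address and generated id_rsa:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady mimics the WaitForSSH step: run "exit 0" over the external ssh
// client with host-key checking disabled, and treat a zero exit status as
// "SSH is available".
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshReady("192.168.39.211", "id_rsa") {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}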
	I1202 11:48:17.033131   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:17.033671   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.033865   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.034018   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:48:17.034033   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetState
	I1202 11:48:17.035293   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:48:17.035305   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:48:17.035310   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:48:17.035315   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.037352   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.037774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037900   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.038083   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038238   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038381   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.038530   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.038713   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.038724   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:48:17.143327   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.143352   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:48:17.143372   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.146175   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146516   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.146548   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146646   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.146838   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.146983   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.147108   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.147258   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.147425   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.147438   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:48:17.253131   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:48:17.253218   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:48:17.253233   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:48:17.253245   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253510   23379 buildroot.go:166] provisioning hostname "ha-604935-m03"
	I1202 11:48:17.253537   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253707   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.256428   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.256796   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256946   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.257116   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257249   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257377   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.257504   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.257691   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.257703   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m03 && echo "ha-604935-m03" | sudo tee /etc/hostname
	I1202 11:48:17.375185   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m03
	
	I1202 11:48:17.375210   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.377667   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378038   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.378062   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378264   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.378483   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378634   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378780   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.378929   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.379106   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.379136   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:48:17.496248   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.496279   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:48:17.496297   23379 buildroot.go:174] setting up certificates
	I1202 11:48:17.496309   23379 provision.go:84] configureAuth start
	I1202 11:48:17.496322   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.496560   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:17.499486   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.499912   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.499947   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.500094   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.502337   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502712   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.502737   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502856   23379 provision.go:143] copyHostCerts
	I1202 11:48:17.502886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.502931   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:48:17.502944   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.503023   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:48:17.503097   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503116   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:48:17.503123   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503148   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:48:17.503191   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503207   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:48:17.503214   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503234   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:48:17.503299   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m03 san=[127.0.0.1 192.168.39.211 ha-604935-m03 localhost minikube]
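The san=[...] list above becomes the subject alternative names of the per-machine server certificate, signed by the shared minikube CA. A compressed sketch of that kind of issuance using only the Go standard library (names, lifetimes and the throwaway CA here are illustrative, not minikube's actual code, and error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate for the new node, carrying the same kind of SAN set
	// as the log line: node IP, loopback, hostname and "minikube".
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-604935-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed server cert in PEM form (server.pem in the log).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}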
	I1202 11:48:17.587852   23379 provision.go:177] copyRemoteCerts
	I1202 11:48:17.587906   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:48:17.587927   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.590598   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.590995   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.591015   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.591197   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.591367   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.591543   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.591679   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:17.674221   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:48:17.674296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:48:17.698597   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:48:17.698660   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:48:17.723039   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:48:17.723097   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:48:17.747396   23379 provision.go:87] duration metric: took 251.076751ms to configureAuth
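copyRemoteCerts pushes the CA and the freshly issued server key pair into /etc/docker on the guest. Writing into a root-owned directory over SSH is commonly done by piping the file into sudo tee; a rough sketch of that approach (an assumption for illustration, not necessarily how minikube's ssh_runner implements its scp step internally):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// pushFile streams a local file to a root-owned path on the guest by piping
// it into "sudo tee" over the external ssh client.
func pushFile(addr, keyPath, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+addr,
		fmt.Sprintf("sudo tee %s >/dev/null", remote))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	// Illustrative paths only; the log copies ca.pem, server.pem and server-key.pem.
	if err := pushFile("192.168.39.211", "id_rsa", "ca.pem", "/etc/docker/ca.pem"); err != nil {
		fmt.Println("copy failed:", err)
	}
}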
	I1202 11:48:17.747416   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:48:17.747635   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:17.747715   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.750670   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751052   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.751081   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751262   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.751452   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751599   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751748   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.751905   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.752098   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.752117   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:48:17.976945   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:48:17.976975   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:48:17.976987   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetURL
	I1202 11:48:17.978227   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using libvirt version 6000000
	I1202 11:48:17.980581   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.980959   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.980987   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.981117   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:48:17.981135   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:48:17.981143   23379 client.go:171] duration metric: took 23.211756514s to LocalClient.Create
	I1202 11:48:17.981168   23379 start.go:167] duration metric: took 23.211833697s to libmachine.API.Create "ha-604935"
	I1202 11:48:17.981181   23379 start.go:293] postStartSetup for "ha-604935-m03" (driver="kvm2")
	I1202 11:48:17.981196   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:48:17.981223   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.981429   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:48:17.981453   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.983470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983816   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.983841   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983966   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.984144   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.984312   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.984449   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.067334   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:48:18.072037   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:48:18.072060   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:48:18.072140   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:48:18.072226   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:48:18.072251   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:48:18.072352   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:48:18.083182   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:18.110045   23379 start.go:296] duration metric: took 128.848906ms for postStartSetup
	I1202 11:48:18.110090   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:18.110693   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.113273   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113636   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.113656   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113891   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:48:18.114175   23379 start.go:128] duration metric: took 23.363096022s to createHost
	I1202 11:48:18.114201   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.116660   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.116982   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.117010   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.117166   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.117378   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117545   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117689   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.117845   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:18.118040   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:18.118051   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:48:18.225174   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140098.198364061
	
	I1202 11:48:18.225197   23379 fix.go:216] guest clock: 1733140098.198364061
	I1202 11:48:18.225206   23379 fix.go:229] Guest: 2024-12-02 11:48:18.198364061 +0000 UTC Remote: 2024-12-02 11:48:18.114189112 +0000 UTC m=+146.672947053 (delta=84.174949ms)
	I1202 11:48:18.225226   23379 fix.go:200] guest clock delta is within tolerance: 84.174949ms
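The fix.go lines compare the guest clock (read with "date +%s.%N" over SSH) against the host clock and only resynchronize when the delta exceeds a tolerance; here the ~84ms drift is accepted. A small sketch of that comparison, taking the raw date output as input:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and reports how far it
// is from the local clock, plus whether that drift is within tolerance.
func clockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	d, ok, err := clockDelta("1733140098.198364061", 2*time.Second)
	fmt.Println(d, ok, err)
}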
	I1202 11:48:18.225232   23379 start.go:83] releasing machines lock for "ha-604935-m03", held for 23.474299783s
	I1202 11:48:18.225255   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.225523   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.228223   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.228665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.228698   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.231057   23379 out.go:177] * Found network options:
	I1202 11:48:18.232381   23379 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.96
	W1202 11:48:18.233581   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.233602   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.233614   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234079   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234244   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234317   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:48:18.234369   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	W1202 11:48:18.234421   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.234435   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.234477   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:48:18.234492   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.237268   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237547   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237709   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.237734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237883   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.237989   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.238016   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.238057   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238152   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.238220   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238300   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238378   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.238455   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238579   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.473317   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:48:18.479920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:48:18.479984   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:48:18.496983   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:48:18.497001   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:48:18.497065   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:48:18.513241   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:48:18.527410   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:48:18.527466   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:48:18.541725   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:48:18.557008   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:48:18.688718   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:48:18.852643   23379 docker.go:233] disabling docker service ...
	I1202 11:48:18.852707   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:48:18.868163   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:48:18.881925   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:48:19.017240   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:48:19.151423   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:48:19.165081   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:48:19.183322   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:48:19.183382   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.193996   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:48:19.194053   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.204159   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.214125   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.224009   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:48:19.234581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.244825   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.261368   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
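The sed calls above rewrite CRI-O's drop-in config (02-crio.conf) in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open low ports via default_sysctls. A small sketch of the same kind of line rewrite done in Go on a local copy of the file; the regex patterns follow the sed expressions in the log, not CRI-O's full configuration grammar:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// tweakCrioConf applies the same style of substitutions the log performs with
// sed: replace the pause_image and cgroup_manager lines wherever they appear.
func tweakCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := tweakCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}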
	I1202 11:48:19.270942   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:48:19.279793   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:48:19.279828   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:48:19.292711   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
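The sysctl probe fails on this image because br_netfilter is not loaded yet (so /proc/sys/net/bridge/ does not exist), which is why the driver falls back to modprobe and then enables IPv4 forwarding directly. A hedged sketch of that sequence; it needs root, like the sudo-prefixed commands in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the fallback in the log: if the bridge-nf sysctl is
// not visible, load br_netfilter, then turn on IPv4 forwarding via /proc.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Module not loaded yet; sysctl exits non-zero (status 255 in the log).
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println(err)
	}
}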
	I1202 11:48:19.302043   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:19.426581   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:48:19.517813   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:48:19.517869   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:48:19.523046   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:48:19.523100   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:48:19.526693   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:48:19.569077   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:48:19.569154   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.606184   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.639221   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:48:19.640557   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:48:19.641750   23379 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.96
	I1202 11:48:19.642878   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:19.645504   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.645963   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:19.645990   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.646180   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:48:19.650508   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:48:19.664882   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:48:19.665139   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:19.665497   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.665538   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.680437   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1202 11:48:19.680830   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.681262   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.681286   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.681575   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.681746   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:48:19.683191   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:19.683564   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.683606   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.697831   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I1202 11:48:19.698152   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.698542   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.698559   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.698845   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.699001   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:19.699166   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.211
	I1202 11:48:19.699179   23379 certs.go:194] generating shared ca certs ...
	I1202 11:48:19.699197   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.699318   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:48:19.699355   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:48:19.699364   23379 certs.go:256] generating profile certs ...
	I1202 11:48:19.699432   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:48:19.699455   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864
	I1202 11:48:19.699468   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.211 192.168.39.254]
	I1202 11:48:19.775540   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 ...
	I1202 11:48:19.775561   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864: {Name:mk862a073739ee2a78cf9f81a3258f4be6a2f692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775718   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 ...
	I1202 11:48:19.775732   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864: {Name:mk2b946b8deaf42e144aacb0aeac107c1e5e5346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775826   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:48:19.775947   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:48:19.776063   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:48:19.776077   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:48:19.776089   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:48:19.776102   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:48:19.776114   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:48:19.776131   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:48:19.776145   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:48:19.776157   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:48:19.800328   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:48:19.800402   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:48:19.800434   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:48:19.800443   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:48:19.800467   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:48:19.800488   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:48:19.800508   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:48:19.800550   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:19.800576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:19.800589   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:48:19.800601   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:48:19.800629   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:19.803275   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803700   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:19.803723   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803908   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:19.804099   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:19.804214   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:19.804377   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:19.880485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:48:19.886022   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:48:19.898728   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:48:19.903305   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:48:19.914871   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:48:19.919141   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:48:19.929566   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:48:19.933478   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:48:19.943613   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:48:19.948089   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:48:19.958895   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:48:19.964303   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:48:19.977617   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:48:20.002994   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:48:20.029806   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:48:20.053441   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:48:20.076846   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1202 11:48:20.100859   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:48:20.123816   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:48:20.147882   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:48:20.170789   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:48:20.194677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:48:20.217677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:48:20.242059   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:48:20.259613   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:48:20.277187   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:48:20.294496   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:48:20.311183   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:48:20.328629   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:48:20.347609   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:48:20.365780   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:48:20.371782   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:48:20.383879   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388524   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388568   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.394674   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:48:20.407273   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:48:20.419450   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424025   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424067   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.429730   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:48:20.440110   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:48:20.451047   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456468   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456512   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.462924   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
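Note: the three CA bundles above are made visible to OpenSSL-based clients by hashing each PEM with "openssl x509 -hash -noout -in <pem>" and symlinking "<hash>.0" under /etc/ssl/certs with "ln -fs", exactly as the logged commands show. A minimal Go sketch of that pattern is below; installCACert is a hypothetical helper and it runs the commands locally via exec, whereas minikube runs them on the node over SSH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert mirrors the logged steps for one PEM: compute its OpenSSL subject
// hash with "openssl x509 -hash -noout -in <pem>" and symlink /etc/ssl/certs/<hash>.0
// at the PEM with "ln -fs" so TLS clients on the node can resolve the CA.
// Local exec (instead of minikube's SSH runner) is an assumption of this sketch.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		return fmt.Errorf("linking %s -> %s: %w", link, pem, err)
	}
	return nil
}

func main() {
	// Paths taken from the log lines above.
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/13416.pem",
		"/usr/share/ca-certificates/134162.pem",
	} {
		if err := installCACert(pem); err != nil {
			fmt.Println(err)
		}
	}
}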
	I1202 11:48:20.474358   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:48:20.478447   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:48:20.478499   23379 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1202 11:48:20.478603   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:48:20.478639   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:48:20.478678   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:48:20.496205   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:48:20.496274   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 11:48:20.496312   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.507618   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:48:20.507658   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.517119   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1202 11:48:20.517130   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:48:20.517161   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517164   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:48:20.517126   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1202 11:48:20.517219   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517234   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.517303   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.534132   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534202   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534220   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:48:20.534247   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:48:20.534296   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:48:20.534330   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:48:20.553870   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:48:20.553896   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
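Note: the binary.go lines above fetch kubeadm, kubectl and kubelet from dl.k8s.io, each paired with a ".sha256" checksum URL. A minimal Go sketch of that download-and-verify step follows; fetchVerified is a hypothetical helper, it assumes the .sha256 file begins with the bare hex digest, and it is not minikube's actual downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and compares its SHA-256 against the digest
// published at url+".sha256", the pattern shown in the binary.go log lines above.
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the file is only trusted if the digest matches.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	// URL taken from the log; destination path is illustrative.
	url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	if err := fetchVerified(url, "/tmp/kubelet"); err != nil {
		fmt.Println(err)
	}
}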
	I1202 11:48:21.369626   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:48:21.380201   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1202 11:48:21.397686   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:48:21.414134   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:48:21.430962   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:48:21.434795   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:48:21.446707   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:21.575648   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:21.592190   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:21.592653   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:21.592702   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:21.607602   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I1202 11:48:21.608034   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:21.608505   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:21.608523   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:21.608871   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:21.609064   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:21.609215   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:48:21.609330   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:48:21.609352   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:21.612246   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612678   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:21.612705   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612919   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:21.613101   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:21.613260   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:21.613431   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:21.802258   23379 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:21.802311   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1202 11:48:44.058534   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.25619815s)
	I1202 11:48:44.058574   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:48:44.589392   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m03 minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:48:44.754182   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:48:44.876509   23379 start.go:319] duration metric: took 23.267291972s to joinCluster
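Note: the join above is a two-step flow visible in the log: the primary node prints a join command with "kubeadm token create --print-join-command --ttl=0", and the new node then runs it with the extra control-plane flags. The Go sketch below reproduces that sequence; joinControlPlane is a hypothetical helper, all flags are copied from the logged commands, and it runs both steps locally with exec, whereas minikube drives each machine over SSH.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// joinControlPlane asks kubeadm for a fresh join command, appends the control-plane
// flags seen in the log, and executes the result.
func joinControlPlane(nodeName, advertiseIP string) error {
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		return fmt.Errorf("creating join command: %w", err)
	}
	// out looks like: "kubeadm join <endpoint>:8443 --token ... --discovery-token-ca-cert-hash sha256:..."
	args := strings.Fields(strings.TrimSpace(string(out)))
	args = append(args,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name="+nodeName,
		"--control-plane",
		"--apiserver-advertise-address="+advertiseIP,
		"--apiserver-bind-port=8443",
	)
	cmd := exec.Command("sudo", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Node name and advertise address taken from the log above.
	if err := joinControlPlane("ha-604935-m03", "192.168.39.211"); err != nil {
		fmt.Println(err)
	}
}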
	I1202 11:48:44.876583   23379 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:44.876929   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:44.877896   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:48:44.879178   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:45.205771   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:45.227079   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:48:45.227379   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:48:45.227437   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:48:45.227646   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m03" to be "Ready" ...
	I1202 11:48:45.227731   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.227739   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.227750   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.227760   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.230602   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:45.728816   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.728844   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.728856   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.728862   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.732325   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:46.228808   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.228838   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.228847   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.228855   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.232971   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:46.728246   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.728266   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.728275   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.728278   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.731578   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:47.228275   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.228293   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.228302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.228305   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.231235   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:47.231687   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:47.728543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.728564   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.728575   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.728580   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.731725   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.228100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.228126   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.228134   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.228139   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.231200   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.727927   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.727953   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.727965   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.727971   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.731841   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.228251   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.228277   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.228288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.228295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.231887   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.232816   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:49.728539   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.728558   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.728567   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.728578   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.731618   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.228164   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.228182   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.228190   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.228194   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.231677   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.728841   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.728865   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.728877   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.728884   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.731790   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:51.227844   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.227875   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.227882   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.227886   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.231092   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.728369   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.728389   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.728397   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.728402   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.731512   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.732161   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:52.228555   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.228577   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.228585   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.228590   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.232624   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:52.727915   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.727935   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.727942   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.727946   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.731213   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:53.228361   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.228382   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.228389   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.228392   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.233382   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:53.728248   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.728268   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.728276   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.728280   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.731032   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:54.228383   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.228402   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.228409   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.228414   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.231567   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:54.232182   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:54.728033   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.728054   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.728070   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.728078   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.731003   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:55.227931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.227952   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.227959   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.227963   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.231124   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:55.728257   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.728282   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.728295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.728302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.731469   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.228616   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.228634   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.228642   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.228648   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.231749   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.232413   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:56.728627   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.728662   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.728672   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.728679   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.731199   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.228073   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.228095   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.228106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.228112   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.231071   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.728355   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.728374   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.728386   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.728390   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.732053   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.228692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.228716   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.228725   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.228731   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.231871   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.232534   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:58.727842   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.727867   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.727888   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.727893   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.730412   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:59.228495   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.228515   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.228522   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.228525   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.232497   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:59.728247   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.728264   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.728272   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.728275   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.731212   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.227900   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.227922   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.227929   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.227932   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.232057   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:00.233141   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:49:00.728080   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.728104   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.728116   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.728123   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.730928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.731736   23379 node_ready.go:49] node "ha-604935-m03" has status "Ready":"True"
	I1202 11:49:00.731754   23379 node_ready.go:38] duration metric: took 15.50409308s for node "ha-604935-m03" to be "Ready" ...
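Note: the readiness wait above polls GET /api/v1/nodes/ha-604935-m03 roughly every 500ms until the node's Ready condition reports True. A minimal client-go sketch of the same check follows; waitNodeReady is a hypothetical helper (not minikube's node_ready.go), and the kubeconfig path, node name, timeout and poll interval are taken from the log purely for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports Ready=True or
// the timeout expires, mirroring the GET /api/v1/nodes/<name> loop in the log.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-604935-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}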
	I1202 11:49:00.731762   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:00.731812   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:00.731821   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.731828   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.731833   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.737119   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:00.743811   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.743881   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:49:00.743889   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.743896   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.743900   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.746447   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.747270   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.747288   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.747298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.747304   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.750173   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.750663   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.750685   23379 pod_ready.go:82] duration metric: took 6.851528ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750697   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750762   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:49:00.750773   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.750782   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.750787   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.753393   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.754225   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.754242   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.754253   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.754261   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.756959   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.757348   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.757363   23379 pod_ready.go:82] duration metric: took 6.658502ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757372   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757427   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:49:00.757438   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.757444   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.757449   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.759919   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.760524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.760540   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.760551   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.760557   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.762639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.763103   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.763117   23379 pod_ready.go:82] duration metric: took 5.738836ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763130   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763170   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:49:00.763178   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.763184   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.763187   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.765295   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.765840   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:00.765853   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.765859   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.765866   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.767856   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:49:00.768294   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.768308   23379 pod_ready.go:82] duration metric: took 5.173078ms for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.768315   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.928568   23379 request.go:632] Waited for 160.204775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928622   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928630   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.928637   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.931639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.129121   23379 request.go:632] Waited for 196.362858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129188   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129194   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.129201   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.129206   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.132093   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.132639   23379 pod_ready.go:93] pod "etcd-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.132663   23379 pod_ready.go:82] duration metric: took 364.340751ms for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.132685   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.328581   23379 request.go:632] Waited for 195.818618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328645   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.328651   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.328659   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.332129   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.528887   23379 request.go:632] Waited for 196.197458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528960   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528968   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.528983   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.528991   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.531764   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.532366   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.532385   23379 pod_ready.go:82] duration metric: took 399.689084ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.532395   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.729145   23379 request.go:632] Waited for 196.686289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729214   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729222   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.729232   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.729241   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.732550   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.928940   23379 request.go:632] Waited for 195.375728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929027   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929039   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.929049   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.929060   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.932849   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.933394   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.933415   23379 pod_ready.go:82] duration metric: took 401.013286ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.933428   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.128618   23379 request.go:632] Waited for 195.115216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128704   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.128714   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.128744   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.132085   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.328195   23379 request.go:632] Waited for 195.287157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328272   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328280   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.328290   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.328294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.331350   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.332062   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.332086   23379 pod_ready.go:82] duration metric: took 398.648799ms for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.332096   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.528402   23379 request.go:632] Waited for 196.237056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528456   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528461   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.528468   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.528471   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.531001   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:02.729030   23379 request.go:632] Waited for 197.344265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729083   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729088   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.729095   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.729101   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.733927   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:02.734415   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.734433   23379 pod_ready.go:82] duration metric: took 402.330362ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.734442   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.928547   23379 request.go:632] Waited for 194.020533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928615   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928624   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.928634   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.933547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.128827   23379 request.go:632] Waited for 194.344486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128890   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128895   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.128915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.128921   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.133610   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.134316   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.134333   23379 pod_ready.go:82] duration metric: took 399.884969ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.134345   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.328421   23379 request.go:632] Waited for 194.000988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328488   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328493   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.328500   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.328505   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.331240   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:03.528448   23379 request.go:632] Waited for 196.353439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528532   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.528542   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.528554   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.532267   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.532704   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.532722   23379 pod_ready.go:82] duration metric: took 398.368333ms for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.532747   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.728896   23379 request.go:632] Waited for 196.080235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728966   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728972   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.728979   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.728982   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.732009   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.929024   23379 request.go:632] Waited for 196.282412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929090   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929096   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.929106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.929111   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.932496   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.933154   23379 pod_ready.go:93] pod "kube-proxy-rp7t2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.933174   23379 pod_ready.go:82] duration metric: took 400.416355ms for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.933184   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.128132   23379 request.go:632] Waited for 194.87576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128183   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128188   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.128196   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.128200   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.131316   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.328392   23379 request.go:632] Waited for 196.344562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328464   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.328488   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.328504   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.331622   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.332330   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.332349   23379 pod_ready.go:82] duration metric: took 399.158434ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.332362   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.528404   23379 request.go:632] Waited for 195.973025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528476   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528485   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.528499   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.528512   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.531287   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.728831   23379 request.go:632] Waited for 196.723103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728880   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728888   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.728918   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.728926   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.731917   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.732716   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.732733   23379 pod_ready.go:82] duration metric: took 400.363929ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.732741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.928126   23379 request.go:632] Waited for 195.328391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928219   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.928242   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.928251   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.931908   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.129033   23379 request.go:632] Waited for 196.165096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129107   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129114   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.129124   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.129131   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.132837   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.133502   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.133521   23379 pod_ready.go:82] duration metric: took 400.774358ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.133531   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.328705   23379 request.go:632] Waited for 195.110801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328775   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328782   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.328792   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.328804   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.332423   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.528425   23379 request.go:632] Waited for 195.360611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528479   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528484   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.528491   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.528494   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.531378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:05.531939   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.531957   23379 pod_ready.go:82] duration metric: took 398.419577ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.531967   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.728987   23379 request.go:632] Waited for 196.947438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729040   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729045   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.729052   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.729056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.732940   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.928937   23379 request.go:632] Waited for 195.348906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928990   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928996   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.929007   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.929023   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.932936   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.933995   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.934013   23379 pod_ready.go:82] duration metric: took 402.03942ms for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.934028   23379 pod_ready.go:39] duration metric: took 5.202257007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
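
The repeated "Waited for ~195ms due to client-side throttling, not priority and fairness" entries above come from client-go's default client-side rate limiter (roughly 5 requests/second with a burst of 10 when left unset), which is why each readiness probe pair here, a pod GET followed by a node GET, lands at about 400ms. A minimal sketch of the same kind of readiness probe with the limiter relaxed; the kubeconfig path, the QPS/Burst values and the hard-coded pod name are illustrative assumptions, not minikube's actual settings:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config the same way kubectl does (path is an assumption).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }

        // Raising QPS/Burst relaxes the client-side throttle responsible for the
        // "Waited for ... due to client-side throttling" lines above; 50/100 is
        // only an illustrative value.
        config.QPS = 50
        config.Burst = 100

        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // One readiness probe from the log: fetch the pod and read its Ready condition.
        pod, err := clientset.CoreV1().Pods("kube-system").
            Get(context.TODO(), "kube-apiserver-ha-604935", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, c.Status)
            }
        }
    }
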
	I1202 11:49:05.934044   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:49:05.934111   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:49:05.950308   23379 api_server.go:72] duration metric: took 21.073692026s to wait for apiserver process to appear ...
	I1202 11:49:05.950330   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:49:05.950350   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:49:05.954392   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:49:05.954463   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:49:05.954472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.954479   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.954484   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.955264   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:49:05.955324   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:49:05.955340   23379 api_server.go:131] duration metric: took 5.002951ms to wait for apiserver health ...
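
Once every control-plane pod reports Ready, the waiter switches to whole-cluster checks: it polls https://192.168.39.102:8443/healthz until the body is the literal "ok", then reads /version to get the control plane version (v1.31.2 above). A rough equivalent using client-go's discovery client, assuming credentials come from a kubeconfig at the default path:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // GET /healthz through the authenticated REST client; a healthy
        // apiserver answers with the literal body "ok", as in the log above.
        body, err := clientset.Discovery().RESTClient().
            Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version, the same call the waiter uses to report the control plane version.
        v, err := clientset.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
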
	I1202 11:49:05.955348   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:49:06.128765   23379 request.go:632] Waited for 173.340291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128831   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128854   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.128868   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.128878   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.134738   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:06.141415   23379 system_pods.go:59] 24 kube-system pods found
	I1202 11:49:06.141437   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.141442   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.141446   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.141449   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.141453   23379 system_pods.go:61] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.141457   23379 system_pods.go:61] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.141461   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.141464   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.141468   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.141471   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.141475   23379 system_pods.go:61] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.141479   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.141487   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.141494   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.141507   23379 system_pods.go:61] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.141512   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.141517   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.141523   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.141527   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.141531   23379 system_pods.go:61] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.141534   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.141540   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.141543   23379 system_pods.go:61] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.141545   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.141551   23379 system_pods.go:74] duration metric: took 186.197102ms to wait for pod list to return data ...
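
The "24 kube-system pods found" block above is a single namespaced list call (GET /api/v1/namespaces/kube-system/pods) with each pod's name, UID and phase echoed. Roughly the same read with client-go, under the same kubeconfig assumption as in the earlier sketch; the namespace is taken from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Same request as GET /api/v1/namespaces/kube-system/pods in the log.
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] Running=%v\n", p.Name, p.UID, p.Status.Phase == corev1.PodRunning)
        }
    }
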
	I1202 11:49:06.141560   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:49:06.329008   23379 request.go:632] Waited for 187.367529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329113   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.329125   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.329130   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.332755   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.332967   23379 default_sa.go:45] found service account: "default"
	I1202 11:49:06.332983   23379 default_sa.go:55] duration metric: took 191.417488ms for default service account to be created ...
	I1202 11:49:06.332991   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:49:06.528293   23379 request.go:632] Waited for 195.242273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528375   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.528382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.528388   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.533257   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:06.539940   23379 system_pods.go:86] 24 kube-system pods found
	I1202 11:49:06.539965   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.539970   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.539976   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.539980   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.539983   23379 system_pods.go:89] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.539986   23379 system_pods.go:89] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.539989   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.539995   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.539998   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.540002   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.540006   23379 system_pods.go:89] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.540009   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.540013   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.540016   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.540020   23379 system_pods.go:89] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.540024   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.540028   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.540034   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.540037   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.540040   23379 system_pods.go:89] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.540043   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.540046   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.540049   23379 system_pods.go:89] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.540053   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.540058   23379 system_pods.go:126] duration metric: took 207.062281ms to wait for k8s-apps to be running ...
	I1202 11:49:06.540068   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:49:06.540106   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:49:06.555319   23379 system_svc.go:56] duration metric: took 15.24289ms WaitForService to wait for kubelet
	I1202 11:49:06.555341   23379 kubeadm.go:582] duration metric: took 21.678727669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:49:06.555356   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:49:06.728222   23379 request.go:632] Waited for 172.787542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728311   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728317   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.728327   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.728332   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.731784   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.733040   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733062   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733074   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733079   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733084   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733088   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733094   23379 node_conditions.go:105] duration metric: took 177.727321ms to run NodePressure ...
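
The NodePressure step reads GET /api/v1/nodes once and reports each node's capacity; the "17734596Ki" and "cpu capacity is 2" lines are the ephemeral-storage and cpu entries of Node.Status.Capacity for the three ha-604935 nodes. A compact sketch of the same read, under the same kubeconfig assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }
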
	I1202 11:49:06.733107   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:49:06.733138   23379 start.go:255] writing updated cluster config ...
	I1202 11:49:06.733452   23379 ssh_runner.go:195] Run: rm -f paused
	I1202 11:49:06.787558   23379 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:49:06.789249   23379 out.go:177] * Done! kubectl is now configured to use "ha-604935" cluster and "default" namespace by default
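
The "==> CRI-O <==" section below is CRI-O's own debug log of CRI gRPC traffic on its unix socket: Version, ImageFsInfo, ListContainers and ListPodSandbox requests and their very long, line-wrapped responses. A hedged sketch of issuing the same Version RPC directly, assuming CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path is CRI-O's default; adjust it if crio.conf overrides the listen address.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)

        // Same RPC that produces the "Request: &VersionRequest{}" /
        // "Response: &VersionResponse{...}" pair in the CRI-O log below.
        resp, err := client.Version(context.TODO(), &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }

The same information is usually easier to read through crictl (crictl version, crictl pods, crictl ps) pointed at the same socket.
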
	
	
	==> CRI-O <==
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.390351100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2877041e-e019-45d6-a745-e5a705354ba2 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.391749244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a38632f-c78f-4a8e-b176-e1d3956f6631 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.392367857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364392341513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a38632f-c78f-4a8e-b176-e1d3956f6631 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.393323452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c953589b-ff17-4cb0-842e-4418206078ee name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.393392139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c953589b-ff17-4cb0-842e-4418206078ee name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.393660745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c953589b-ff17-4cb0-842e-4418206078ee name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.400272483Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cf31999-fbd1-44c5-a841-2876383cb4f5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.400892834Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8jxc4,Uid:f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733140148279592187,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:49:07.666936685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1023dda9-1199-4200-9b82-bb054a0eedff,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1733140013381225285,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T11:46:53.065981152Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g48q9,Uid:66ce87a9-4918-45fd-9721-d4e6323b7b54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733140013379375022,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:53.065488407Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5gcc2,Uid:63fea190-8001-4264-a579-13a9cae6ddff,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733140013372020076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63fea190-8001-4264-a579-13a9cae6ddff,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:53.058488150Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&PodSandboxMetadata{Name:kindnet-k99r8,Uid:e5466844-1f48-46c2-8e34-c4bf016b9656,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139999159079477,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:38.840314062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&PodSandboxMetadata{Name:kube-proxy-tqcb6,Uid:d576fbb5-bee1-4482-82f5-b21a5e1e65f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139999157955919,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:38.836053895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-604935,Uid:3795b7eb129e1555193fc4481f415c61,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733139987835770182,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3795b7eb129e1555193fc4481f415c61,kubernetes.io/config.seen: 2024-12-02T11:46:27.334541833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-604935,Uid:e34a31690bf4b94086a296305429f2bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987829372109,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{kubernetes.io/config.hash: e34a
31690bf4b94086a296305429f2bd,kubernetes.io/config.seen: 2024-12-02T11:46:27.334542605Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-604935,Uid:1298b086a2bd0a1c4a6a3d5c72224eab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987825890188,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.102:8443,kubernetes.io/config.hash: 1298b086a2bd0a1c4a6a3d5c72224eab,kubernetes.io/config.seen: 2024-12-02T11:46:27.334538959Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Met
adata:&PodSandboxMetadata{Name:etcd-ha-604935,Uid:7e46709c5369afc1ad72a60c327e7e03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987807865871,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.102:2379,kubernetes.io/config.hash: 7e46709c5369afc1ad72a60c327e7e03,kubernetes.io/config.seen: 2024-12-02T11:46:27.334535639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-604935,Uid:367ab693a9f84a18356ae64542b127be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987806690295,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 367ab693a9f84a18356ae64542b127be,kubernetes.io/config.seen: 2024-12-02T11:46:27.334540819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7cf31999-fbd1-44c5-a841-2876383cb4f5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.403119151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b0b688f-df61-4d98-aec2-3fb70b3886e3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.403212137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b0b688f-df61-4d98-aec2-3fb70b3886e3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.403466320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b0b688f-df61-4d98-aec2-3fb70b3886e3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.433076987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd33750f-efd5-4032-a8fd-f36ef86d461a name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.433155595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd33750f-efd5-4032-a8fd-f36ef86d461a name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.435285279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7d78e3e-2a26-4c68-9157-af047bd3b767 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.435940826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364435920125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7d78e3e-2a26-4c68-9157-af047bd3b767 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.436773388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bdf3b40-25c1-4d2e-89d0-2364b86c31f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.436839709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bdf3b40-25c1-4d2e-89d0-2364b86c31f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.437100185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bdf3b40-25c1-4d2e-89d0-2364b86c31f0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.477960525Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a90b864a-e8ca-4036-bb80-a5347b46798a name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.478027179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a90b864a-e8ca-4036-bb80-a5347b46798a name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.478825651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=901909bb-dfb2-4b2e-879f-fc1edfa1c1c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.479319647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364479292127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=901909bb-dfb2-4b2e-879f-fc1edfa1c1c6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.479977772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aac3c34a-f4b4-4805-967a-ba954023ba86 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.480048601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aac3c34a-f4b4-4805-967a-ba954023ba86 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:44 ha-604935 crio[658]: time="2024-12-02 11:52:44.480269545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aac3c34a-f4b4-4805-967a-ba954023ba86 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	27068dc5178bb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1f0c13e663748       busybox-7dff88458-8jxc4
	be0c4adffd61b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   72cc1a04d8965       coredns-7c65d6cfc9-g48q9
	91c90e9d05cf7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   abbb2caf2ff00       coredns-7c65d6cfc9-5gcc2
	9d7d77b59569b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   40752b9892351       storage-provisioner
	579b11920d9fd       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   646eade60f2d2       kindnet-k99r8
	f6a700874f779       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   8ba57f92e62cd       kube-proxy-tqcb6
	17bfa0393f187       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   096eb67e8b05d       kube-vip-ha-604935
	275d716cfd4f7       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8978121739b66       kube-controller-manager-ha-604935
	090e4a0254277       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   1989811c4f393       kube-scheduler-ha-604935
	53184ed95349a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ec95830bfe24d       etcd-ha-604935
	9624bba327f9b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   fc4151eee5a3f       kube-apiserver-ha-604935
	
	
	==> coredns [91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f] <==
	[INFO] 10.244.0.4:39323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215731s
	[INFO] 10.244.0.4:33525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162613s
	[INFO] 10.244.0.4:39123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125815s
	[INFO] 10.244.0.4:37376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000244786s
	[INFO] 10.244.2.2:44210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174232s
	[INFO] 10.244.2.2:54748 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001765833s
	[INFO] 10.244.2.2:60174 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284786s
	[INFO] 10.244.2.2:50584 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109022s
	[INFO] 10.244.2.2:34854 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001186229s
	[INFO] 10.244.2.2:42659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081441s
	[INFO] 10.244.2.2:51018 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119851s
	[INFO] 10.244.1.2:51189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371264s
	[INFO] 10.244.1.2:57162 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158703s
	[INFO] 10.244.0.4:59693 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068002s
	[INFO] 10.244.0.4:51163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042042s
	[INFO] 10.244.2.2:40625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117188s
	[INFO] 10.244.1.2:49002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091339s
	[INFO] 10.244.1.2:42507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192925s
	[INFO] 10.244.0.4:36452 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215238s
	[INFO] 10.244.0.4:41389 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010969s
	[INFO] 10.244.2.2:55194 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180309s
	[INFO] 10.244.2.2:45875 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109142s
	[INFO] 10.244.1.2:42301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164839s
	[INFO] 10.244.1.2:47133 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176562s
	[INFO] 10.244.1.2:42848 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122646s
	
	
	==> coredns [be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818] <==
	[INFO] 10.244.1.2:33047 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000108391s
	[INFO] 10.244.1.2:40927 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001980013s
	[INFO] 10.244.0.4:37566 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004168289s
	[INFO] 10.244.0.4:36737 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252503s
	[INFO] 10.244.0.4:33046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003375406s
	[INFO] 10.244.0.4:42598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177128s
	[INFO] 10.244.2.2:46358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148802s
	[INFO] 10.244.1.2:55837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194128s
	[INFO] 10.244.1.2:55278 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002096061s
	[INFO] 10.244.1.2:45640 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141771s
	[INFO] 10.244.1.2:36834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204172s
	[INFO] 10.244.1.2:41503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026722s
	[INFO] 10.244.1.2:46043 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001413s
	[INFO] 10.244.0.4:37544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011909s
	[INFO] 10.244.0.4:58597 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007644s
	[INFO] 10.244.2.2:41510 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179912s
	[INFO] 10.244.2.2:41733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013607s
	[INFO] 10.244.2.2:57759 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205972s
	[INFO] 10.244.1.2:54620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248357s
	[INFO] 10.244.1.2:40630 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109148s
	[INFO] 10.244.0.4:39309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113844s
	[INFO] 10.244.0.4:42691 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170784s
	[INFO] 10.244.2.2:41138 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112783s
	[INFO] 10.244.2.2:32778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073017s
	[INFO] 10.244.1.2:42298 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018329s
	
	
	==> describe nodes <==
	Name:               ha-604935
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-604935
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4653179aa8d04165a06718969a078842
	  System UUID:                4653179a-a8d0-4165-a067-18969a078842
	  Boot ID:                    059fb5e8-3774-458b-bfbf-8364817017d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8jxc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7c65d6cfc9-5gcc2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 coredns-7c65d6cfc9-g48q9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 etcd-ha-604935                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-k99r8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-604935             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-604935    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-tqcb6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-604935             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-604935                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m4s                   kube-proxy       
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m10s (x2 over 6m10s)  kubelet          Node ha-604935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x2 over 6m10s)  kubelet          Node ha-604935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x2 over 6m10s)  kubelet          Node ha-604935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  NodeReady                5m51s                  kubelet          Node ha-604935 status is now: NodeReady
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	
	
	Name:               ha-604935-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:47:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:50:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-604935-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f21093f5748416fa30ea8181c31a3f7
	  System UUID:                0f21093f-5748-416f-a30e-a8181c31a3f7
	  Boot ID:                    5621b6a5-bb1a-408d-b692-10c4aad4b418
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xbb9t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-604935-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-l55rq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m14s
	  kube-system                 kube-apiserver-ha-604935-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-ha-604935-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-w9r4x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-ha-604935-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-604935-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node ha-604935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  NodeNotReady             99s                    node-controller  Node ha-604935-m02 status is now: NodeNotReady
	
	
	Name:               ha-604935-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:48:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:49:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-604935-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8588450b38914bf3ac287b253d72fb4d
	  System UUID:                8588450b-3891-4bf3-ac28-7b253d72fb4d
	  Boot ID:                    735a98f4-21e5-4433-a99b-76bab3cbd392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l5kq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-604935-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-j4cr6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-604935-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-604935-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-proxy-rp7t2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-604935-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-vip-ha-604935-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-604935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           3m54s                node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	
	
	Name:               ha-604935-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-604935-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 577fefe5032840e68ccf6ba2b6fbcf44
	  System UUID:                577fefe5-0328-40e6-8ccf-6ba2b6fbcf44
	  Boot ID:                    5f3dbc6d-6884-49f4-acef-8235bb29f467
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwxsc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m59s
	  kube-system                 kube-proxy-v649d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-604935-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     2m59s            cidrAllocator    Node ha-604935-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           2m59s            node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-604935-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 2 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051551] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040036] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 2 11:46] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.564296] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.579239] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.318373] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057883] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148672] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.135107] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.277991] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.959381] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.016173] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058991] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.327237] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.069565] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.092272] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.163087] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 2 11:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46] <==
	{"level":"warn","ts":"2024-12-02T11:52:44.568530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.668636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.671873Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.746207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.757195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.761630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.768570Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.776164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.784061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.791042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.794223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.797521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.804987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.810525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.815970Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.819831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.822781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.830696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.836179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.841153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.843886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.846820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.850761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.919589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:44.921576Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:52:44 up 6 min,  0 users,  load average: 0.84, 0.45, 0.19
	Linux ha-604935 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10] <==
	I1202 11:52:12.903386       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:22.909600       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:22.909707       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:22.910146       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:22.910209       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:22.910752       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:22.910826       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:22.911166       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:22.911211       1 main.go:301] handling current node
	I1202 11:52:32.901182       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:32.901286       1 main.go:301] handling current node
	I1202 11:52:32.901341       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:32.901493       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:32.901812       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:32.901855       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:32.902073       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:32.903249       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:42.901238       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:42.901327       1 main.go:301] handling current node
	I1202 11:52:42.901361       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:42.901380       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:42.901720       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:42.901758       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:42.903817       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:42.903856       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6] <==
	I1202 11:46:32.842650       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 11:46:32.848385       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1202 11:46:32.849164       1 controller.go:615] quota admission added evaluator for: endpoints
	I1202 11:46:32.859606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 11:46:33.159098       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1202 11:46:34.294370       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1202 11:46:34.315176       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	http2: server: error reading preface from client 192.168.39.254:47786: read tcp 192.168.39.254:8443->192.168.39.254:47786: read: connection reset by peer
	I1202 11:46:34.492102       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1202 11:46:38.758671       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1202 11:46:38.805955       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1202 11:49:11.846753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54452: use of closed network connection
	E1202 11:49:12.028104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54460: use of closed network connection
	E1202 11:49:12.199806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54474: use of closed network connection
	E1202 11:49:12.392612       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54484: use of closed network connection
	E1202 11:49:12.562047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54506: use of closed network connection
	E1202 11:49:12.747509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54530: use of closed network connection
	E1202 11:49:12.939816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54544: use of closed network connection
	E1202 11:49:13.121199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54562: use of closed network connection
	E1202 11:49:13.295085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54584: use of closed network connection
	E1202 11:49:13.578607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54612: use of closed network connection
	E1202 11:49:13.757972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54638: use of closed network connection
	E1202 11:49:14.099757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54676: use of closed network connection
	E1202 11:49:14.269710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54694: use of closed network connection
	E1202 11:49:14.441652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	
	
	==> kube-controller-manager [275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41] <==
	I1202 11:49:45.139269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.144540       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.233566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.349805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.679160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.939032       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-604935-m04"
	I1202 11:49:47.939241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.969287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.605926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.681129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:55.357132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.214872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.215953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:50:04.236833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.619357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:15.555711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:51:05.313473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.313596       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:51:05.338955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.387666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.010033ms"
	I1202 11:51:05.388828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.832µs"
	I1202 11:51:05.441675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.06791ms"
	I1202 11:51:05.442993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.629µs"
	I1202 11:51:07.990253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:10.625653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	
	
	==> kube-proxy [f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 11:46:39.991996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 11:46:40.020254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1202 11:46:40.020650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:46:40.086409       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 11:46:40.086557       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 11:46:40.086602       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:46:40.089997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:46:40.090696       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:46:40.090739       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:46:40.096206       1 config.go:199] "Starting service config controller"
	I1202 11:46:40.096522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:46:40.096732       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:46:40.096763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:46:40.098314       1 config.go:328] "Starting node config controller"
	I1202 11:46:40.099010       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:46:40.196939       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:46:40.197006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:46:40.199281       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35] <==
	W1202 11:46:32.142852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 11:46:32.142937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.153652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.153702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.221641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 11:46:32.221961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.358170       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:46:32.358291       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:46:32.429924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.430007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.430758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:46:32.430825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.449596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:46:32.449697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.505859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:46:32.505943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1202 11:46:34.815786       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1202 11:49:07.673886       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	E1202 11:49:07.674510       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fc236bbd-f34b-454f-a66d-b369cd19cf9d(default/busybox-7dff88458-xbb9t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xbb9t"
	E1202 11:49:07.674758       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.675368       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb(default/busybox-7dff88458-8jxc4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8jxc4"
	E1202 11:49:07.675694       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" pod="default/busybox-7dff88458-8jxc4"
	I1202 11:49:07.676018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.678080       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" pod="default/busybox-7dff88458-xbb9t"
	I1202 11:49:07.679000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	
	
	==> kubelet <==
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:51:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518783    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518905    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520250    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520275    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524305    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524384    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526662    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526711    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.527977    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.528325    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530019    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530407    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.436289    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531571    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531618    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532768    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532808    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-604935 -n ha-604935
helpers_test.go:261: (dbg) Run:  kubectl --context ha-604935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.380361691s)
ha_test.go:415: expected profile "ha-604935" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-604935\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-604935\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-604935\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.102\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.96\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.211\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.26\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevir
t\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\"
,\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-604935 -n ha-604935
E1202 11:52:49.238487   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 logs -n 25: (1.334832775s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m03_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m04 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp testdata/cp-test.txt                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m03 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-604935 node stop m02 -v=7                                                     | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:45:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:45:51.477333   23379 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:51.477429   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477436   23379 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:51.477440   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477579   23379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:45:51.478080   23379 out.go:352] Setting JSON to false
	I1202 11:45:51.478853   23379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1703,"bootTime":1733138248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:45:51.478907   23379 start.go:139] virtualization: kvm guest
	I1202 11:45:51.480873   23379 out.go:177] * [ha-604935] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:45:51.482060   23379 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:45:51.482068   23379 notify.go:220] Checking for updates...
	I1202 11:45:51.484245   23379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:45:51.485502   23379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:45:51.486630   23379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.487842   23379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:45:51.488928   23379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:45:51.490194   23379 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:45:51.523210   23379 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 11:45:51.524197   23379 start.go:297] selected driver: kvm2
	I1202 11:45:51.524207   23379 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:45:51.524217   23379 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:45:51.524886   23379 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.524953   23379 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:45:51.538752   23379 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:45:51.538805   23379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:45:51.539057   23379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:45:51.539096   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:45:51.539154   23379 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1202 11:45:51.539162   23379 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:45:51.539222   23379 start.go:340] cluster config:
	{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1202 11:45:51.539330   23379 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.540849   23379 out.go:177] * Starting "ha-604935" primary control-plane node in "ha-604935" cluster
	I1202 11:45:51.542035   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:45:51.542064   23379 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:45:51.542073   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:45:51.542155   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:45:51.542168   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:45:51.542474   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:45:51.542495   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json: {Name:mkd56e76e09e18927ad08e110fcb7c73441ee1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:45:51.542653   23379 start.go:360] acquireMachinesLock for ha-604935: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:45:51.542690   23379 start.go:364] duration metric: took 21.87µs to acquireMachinesLock for "ha-604935"
	I1202 11:45:51.542712   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:45:51.542769   23379 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 11:45:51.544215   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:45:51.544376   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:51.544410   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:51.558068   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I1202 11:45:51.558542   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:51.559117   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:45:51.559144   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:51.559441   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:51.559624   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:45:51.559747   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:45:51.559887   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:45:51.559913   23379 client.go:168] LocalClient.Create starting
	I1202 11:45:51.559938   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:45:51.559978   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.559999   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560059   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:45:51.560086   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.560103   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560134   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:45:51.560147   23379 main.go:141] libmachine: (ha-604935) Calling .PreCreateCheck
	I1202 11:45:51.560467   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:45:51.560846   23379 main.go:141] libmachine: Creating machine...
	I1202 11:45:51.560861   23379 main.go:141] libmachine: (ha-604935) Calling .Create
	I1202 11:45:51.560982   23379 main.go:141] libmachine: (ha-604935) Creating KVM machine...
	I1202 11:45:51.562114   23379 main.go:141] libmachine: (ha-604935) DBG | found existing default KVM network
	I1202 11:45:51.562698   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.562571   23402 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002231e0}
	I1202 11:45:51.562725   23379 main.go:141] libmachine: (ha-604935) DBG | created network xml: 
	I1202 11:45:51.562738   23379 main.go:141] libmachine: (ha-604935) DBG | <network>
	I1202 11:45:51.562750   23379 main.go:141] libmachine: (ha-604935) DBG |   <name>mk-ha-604935</name>
	I1202 11:45:51.562762   23379 main.go:141] libmachine: (ha-604935) DBG |   <dns enable='no'/>
	I1202 11:45:51.562773   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562781   23379 main.go:141] libmachine: (ha-604935) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1202 11:45:51.562793   23379 main.go:141] libmachine: (ha-604935) DBG |     <dhcp>
	I1202 11:45:51.562803   23379 main.go:141] libmachine: (ha-604935) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1202 11:45:51.562814   23379 main.go:141] libmachine: (ha-604935) DBG |     </dhcp>
	I1202 11:45:51.562827   23379 main.go:141] libmachine: (ha-604935) DBG |   </ip>
	I1202 11:45:51.562839   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562849   23379 main.go:141] libmachine: (ha-604935) DBG | </network>
	I1202 11:45:51.562861   23379 main.go:141] libmachine: (ha-604935) DBG | 
	I1202 11:45:51.567359   23379 main.go:141] libmachine: (ha-604935) DBG | trying to create private KVM network mk-ha-604935 192.168.39.0/24...
	I1202 11:45:51.627851   23379 main.go:141] libmachine: (ha-604935) DBG | private KVM network mk-ha-604935 192.168.39.0/24 created
	I1202 11:45:51.627878   23379 main.go:141] libmachine: (ha-604935) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:51.627909   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.627845   23402 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.627936   23379 main.go:141] libmachine: (ha-604935) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:45:51.627956   23379 main.go:141] libmachine: (ha-604935) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:45:51.873906   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.873783   23402 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa...
	I1202 11:45:52.258389   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258298   23402 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk...
	I1202 11:45:52.258412   23379 main.go:141] libmachine: (ha-604935) DBG | Writing magic tar header
	I1202 11:45:52.258421   23379 main.go:141] libmachine: (ha-604935) DBG | Writing SSH key tar header
	I1202 11:45:52.258433   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258404   23402 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:52.258549   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935
	I1202 11:45:52.258587   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:45:52.258600   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 (perms=drwx------)
	I1202 11:45:52.258612   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:45:52.258622   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:45:52.258639   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:45:52.258670   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:45:52.258686   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:52.258699   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:45:52.258711   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:45:52.258726   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:45:52.258742   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:52.258748   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:45:52.258755   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home
	I1202 11:45:52.258760   23379 main.go:141] libmachine: (ha-604935) DBG | Skipping /home - not owner
	I1202 11:45:52.259679   23379 main.go:141] libmachine: (ha-604935) define libvirt domain using xml: 
	I1202 11:45:52.259691   23379 main.go:141] libmachine: (ha-604935) <domain type='kvm'>
	I1202 11:45:52.259699   23379 main.go:141] libmachine: (ha-604935)   <name>ha-604935</name>
	I1202 11:45:52.259718   23379 main.go:141] libmachine: (ha-604935)   <memory unit='MiB'>2200</memory>
	I1202 11:45:52.259726   23379 main.go:141] libmachine: (ha-604935)   <vcpu>2</vcpu>
	I1202 11:45:52.259737   23379 main.go:141] libmachine: (ha-604935)   <features>
	I1202 11:45:52.259745   23379 main.go:141] libmachine: (ha-604935)     <acpi/>
	I1202 11:45:52.259755   23379 main.go:141] libmachine: (ha-604935)     <apic/>
	I1202 11:45:52.259762   23379 main.go:141] libmachine: (ha-604935)     <pae/>
	I1202 11:45:52.259776   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.259792   23379 main.go:141] libmachine: (ha-604935)   </features>
	I1202 11:45:52.259808   23379 main.go:141] libmachine: (ha-604935)   <cpu mode='host-passthrough'>
	I1202 11:45:52.259826   23379 main.go:141] libmachine: (ha-604935)   
	I1202 11:45:52.259835   23379 main.go:141] libmachine: (ha-604935)   </cpu>
	I1202 11:45:52.259843   23379 main.go:141] libmachine: (ha-604935)   <os>
	I1202 11:45:52.259851   23379 main.go:141] libmachine: (ha-604935)     <type>hvm</type>
	I1202 11:45:52.259863   23379 main.go:141] libmachine: (ha-604935)     <boot dev='cdrom'/>
	I1202 11:45:52.259871   23379 main.go:141] libmachine: (ha-604935)     <boot dev='hd'/>
	I1202 11:45:52.259896   23379 main.go:141] libmachine: (ha-604935)     <bootmenu enable='no'/>
	I1202 11:45:52.259912   23379 main.go:141] libmachine: (ha-604935)   </os>
	I1202 11:45:52.259917   23379 main.go:141] libmachine: (ha-604935)   <devices>
	I1202 11:45:52.259925   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='cdrom'>
	I1202 11:45:52.259935   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/boot2docker.iso'/>
	I1202 11:45:52.259939   23379 main.go:141] libmachine: (ha-604935)       <target dev='hdc' bus='scsi'/>
	I1202 11:45:52.259944   23379 main.go:141] libmachine: (ha-604935)       <readonly/>
	I1202 11:45:52.259951   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259956   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='disk'>
	I1202 11:45:52.259963   23379 main.go:141] libmachine: (ha-604935)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:45:52.259970   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk'/>
	I1202 11:45:52.259978   23379 main.go:141] libmachine: (ha-604935)       <target dev='hda' bus='virtio'/>
	I1202 11:45:52.259982   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259992   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260000   23379 main.go:141] libmachine: (ha-604935)       <source network='mk-ha-604935'/>
	I1202 11:45:52.260004   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260011   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260015   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260020   23379 main.go:141] libmachine: (ha-604935)       <source network='default'/>
	I1202 11:45:52.260026   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260031   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260035   23379 main.go:141] libmachine: (ha-604935)     <serial type='pty'>
	I1202 11:45:52.260040   23379 main.go:141] libmachine: (ha-604935)       <target port='0'/>
	I1202 11:45:52.260045   23379 main.go:141] libmachine: (ha-604935)     </serial>
	I1202 11:45:52.260050   23379 main.go:141] libmachine: (ha-604935)     <console type='pty'>
	I1202 11:45:52.260059   23379 main.go:141] libmachine: (ha-604935)       <target type='serial' port='0'/>
	I1202 11:45:52.260081   23379 main.go:141] libmachine: (ha-604935)     </console>
	I1202 11:45:52.260097   23379 main.go:141] libmachine: (ha-604935)     <rng model='virtio'>
	I1202 11:45:52.260105   23379 main.go:141] libmachine: (ha-604935)       <backend model='random'>/dev/random</backend>
	I1202 11:45:52.260113   23379 main.go:141] libmachine: (ha-604935)     </rng>
	I1202 11:45:52.260119   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260131   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260139   23379 main.go:141] libmachine: (ha-604935)   </devices>
	I1202 11:45:52.260142   23379 main.go:141] libmachine: (ha-604935) </domain>
	I1202 11:45:52.260148   23379 main.go:141] libmachine: (ha-604935) 
	I1202 11:45:52.264453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e2:c6:db in network default
	I1202 11:45:52.264963   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:52.264976   23379 main.go:141] libmachine: (ha-604935) Ensuring networks are active...
	I1202 11:45:52.265536   23379 main.go:141] libmachine: (ha-604935) Ensuring network default is active
	I1202 11:45:52.265809   23379 main.go:141] libmachine: (ha-604935) Ensuring network mk-ha-604935 is active
	I1202 11:45:52.266301   23379 main.go:141] libmachine: (ha-604935) Getting domain xml...
	I1202 11:45:52.266972   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:53.425942   23379 main.go:141] libmachine: (ha-604935) Waiting to get IP...
	I1202 11:45:53.426812   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.427160   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.427221   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.427145   23402 retry.go:31] will retry after 201.077519ms: waiting for machine to come up
	I1202 11:45:53.629564   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.629950   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.629976   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.629910   23402 retry.go:31] will retry after 339.273732ms: waiting for machine to come up
	I1202 11:45:53.970328   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.970740   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.970764   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.970705   23402 retry.go:31] will retry after 350.772564ms: waiting for machine to come up
	I1202 11:45:54.323244   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.323628   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.323652   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.323595   23402 retry.go:31] will retry after 510.154735ms: waiting for machine to come up
	I1202 11:45:54.834818   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.835184   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.835211   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.835141   23402 retry.go:31] will retry after 497.813223ms: waiting for machine to come up
	I1202 11:45:55.334326   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.334697   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.334728   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.334631   23402 retry.go:31] will retry after 593.538742ms: waiting for machine to come up
	I1202 11:45:55.929133   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.929547   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.929575   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.929508   23402 retry.go:31] will retry after 1.005519689s: waiting for machine to come up
	I1202 11:45:56.936100   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:56.936549   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:56.936581   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:56.936492   23402 retry.go:31] will retry after 1.273475187s: waiting for machine to come up
	I1202 11:45:58.211849   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:58.212240   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:58.212280   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:58.212213   23402 retry.go:31] will retry after 1.292529083s: waiting for machine to come up
	I1202 11:45:59.506572   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:59.506909   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:59.506934   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:59.506880   23402 retry.go:31] will retry after 1.800735236s: waiting for machine to come up
	I1202 11:46:01.309936   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:01.310447   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:01.310467   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:01.310416   23402 retry.go:31] will retry after 2.83980414s: waiting for machine to come up
	I1202 11:46:04.153261   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:04.153728   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:04.153748   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:04.153704   23402 retry.go:31] will retry after 2.497515599s: waiting for machine to come up
	I1202 11:46:06.652765   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:06.653095   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:06.653119   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:06.653068   23402 retry.go:31] will retry after 2.762441656s: waiting for machine to come up
	I1202 11:46:09.418859   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:09.419194   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:09.419220   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:09.419149   23402 retry.go:31] will retry after 3.896839408s: waiting for machine to come up
	I1202 11:46:13.318223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318677   23379 main.go:141] libmachine: (ha-604935) Found IP for machine: 192.168.39.102
	I1202 11:46:13.318696   23379 main.go:141] libmachine: (ha-604935) Reserving static IP address...
	I1202 11:46:13.318709   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has current primary IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318957   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "ha-604935", mac: "52:54:00:e0:fa:7c", ip: "192.168.39.102"} in network mk-ha-604935
	I1202 11:46:13.386650   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:13.386676   23379 main.go:141] libmachine: (ha-604935) Reserved static IP address: 192.168.39.102
	I1202 11:46:13.386705   23379 main.go:141] libmachine: (ha-604935) Waiting for SSH to be available...
	I1202 11:46:13.389178   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.389540   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935
	I1202 11:46:13.389567   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:e0:fa:7c
	I1202 11:46:13.389737   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:13.389771   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:13.389833   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:13.389853   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:13.389865   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:13.393280   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:46:13.393302   23379 main.go:141] libmachine: (ha-604935) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:46:13.393311   23379 main.go:141] libmachine: (ha-604935) DBG | command : exit 0
	I1202 11:46:13.393319   23379 main.go:141] libmachine: (ha-604935) DBG | err     : exit status 255
	I1202 11:46:13.393329   23379 main.go:141] libmachine: (ha-604935) DBG | output  : 
	I1202 11:46:16.395489   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:16.397696   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398004   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.398035   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398057   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:16.398092   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:16.398150   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:16.398173   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:16.398186   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:16.524025   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: <nil>: 
	I1202 11:46:16.524319   23379 main.go:141] libmachine: (ha-604935) KVM machine creation complete!
	I1202 11:46:16.524585   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:16.525132   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525296   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525429   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:46:16.525444   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:16.526494   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:46:16.526509   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:46:16.526516   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:46:16.526523   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.528453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.528856   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.528879   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.529035   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.529215   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529537   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.529694   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.529924   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.529940   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:46:16.639198   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.639221   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:46:16.639229   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.641755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642065   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.642082   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642197   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.642389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642587   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642718   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.642866   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.643032   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.643046   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:46:16.748649   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:46:16.748721   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:46:16.748732   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:46:16.748738   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.748943   23379 buildroot.go:166] provisioning hostname "ha-604935"
	I1202 11:46:16.748965   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.749139   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.751455   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751828   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.751862   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751971   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.752141   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752285   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752419   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.752578   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.752754   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.752769   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935 && echo "ha-604935" | sudo tee /etc/hostname
	I1202 11:46:16.869057   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:46:16.869084   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.871187   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871464   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.871482   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871651   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.871810   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.871940   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.872049   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.872201   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.872396   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.872412   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:46:16.984630   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.984655   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:46:16.984684   23379 buildroot.go:174] setting up certificates
	I1202 11:46:16.984696   23379 provision.go:84] configureAuth start
	I1202 11:46:16.984709   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.984946   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:16.987426   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987732   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.987755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987901   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.989843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990098   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.990122   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990257   23379 provision.go:143] copyHostCerts
	I1202 11:46:16.990285   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990325   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:46:16.990334   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990403   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:46:16.990485   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990508   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:46:16.990522   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990547   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:46:16.990600   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990616   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:46:16.990622   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990641   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:46:16.990697   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935 san=[127.0.0.1 192.168.39.102 ha-604935 localhost minikube]
	I1202 11:46:17.091711   23379 provision.go:177] copyRemoteCerts
	I1202 11:46:17.091762   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:46:17.091783   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.093867   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094147   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.094176   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094310   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.094467   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.094595   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.094701   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.178212   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:46:17.178264   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:46:17.201820   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:46:17.201876   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:46:17.224492   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:46:17.224550   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1202 11:46:17.246969   23379 provision.go:87] duration metric: took 262.263543ms to configureAuth
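configureAuth above issues a libmachine server certificate whose SANs are listed in the log (127.0.0.1, 192.168.39.102, ha-604935, localhost, minikube). The following standard-library Go sketch shows how a CA-signed server certificate with those SANs can be produced; it uses a throwaway CA and elides error handling, and is an illustration rather than minikube's provisioning code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (in the real flow the CA comes from .minikube/certs/ca.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs reported in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			DNSNames:     []string{"ha-604935", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}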
	I1202 11:46:17.246987   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:46:17.247165   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:17.247239   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.249583   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.249877   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.249899   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.250032   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.250183   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250315   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250423   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.250529   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.250670   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.250686   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:46:17.469650   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:46:17.469676   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:46:17.469685   23379 main.go:141] libmachine: (ha-604935) Calling .GetURL
	I1202 11:46:17.470859   23379 main.go:141] libmachine: (ha-604935) DBG | Using libvirt version 6000000
	I1202 11:46:17.472792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473049   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.473078   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473161   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:46:17.473172   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:46:17.473179   23379 client.go:171] duration metric: took 25.91325953s to LocalClient.Create
	I1202 11:46:17.473201   23379 start.go:167] duration metric: took 25.913314916s to libmachine.API.Create "ha-604935"
	I1202 11:46:17.473214   23379 start.go:293] postStartSetup for "ha-604935" (driver="kvm2")
	I1202 11:46:17.473228   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:46:17.473243   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.473431   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:46:17.473460   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.475686   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.475977   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.476003   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.476117   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.476292   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.476424   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.476570   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.558504   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:46:17.562731   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:46:17.562753   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:46:17.562801   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:46:17.562870   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:46:17.562886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:46:17.562973   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:46:17.572589   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:17.596338   23379 start.go:296] duration metric: took 123.108175ms for postStartSetup
	I1202 11:46:17.596385   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:17.596933   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.599535   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.599863   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.599888   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.600036   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:17.600197   23379 start.go:128] duration metric: took 26.057419293s to createHost
	I1202 11:46:17.600216   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.602393   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602679   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.602700   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602888   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.603033   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603150   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603243   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.603351   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.603548   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.603565   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:46:17.708694   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139977.687468447
	
	I1202 11:46:17.708715   23379 fix.go:216] guest clock: 1733139977.687468447
	I1202 11:46:17.708724   23379 fix.go:229] Guest: 2024-12-02 11:46:17.687468447 +0000 UTC Remote: 2024-12-02 11:46:17.600208028 +0000 UTC m=+26.158965969 (delta=87.260419ms)
	I1202 11:46:17.708747   23379 fix.go:200] guest clock delta is within tolerance: 87.260419ms
	I1202 11:46:17.708757   23379 start.go:83] releasing machines lock for "ha-604935", held for 26.166055586s
	I1202 11:46:17.708779   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.708992   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.711541   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711821   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.711843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711972   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712458   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712646   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712736   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:46:17.712776   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.712829   23379 ssh_runner.go:195] Run: cat /version.json
	I1202 11:46:17.712853   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.715060   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715759   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.715798   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715960   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716014   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716187   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716313   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.716339   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716347   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716430   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716502   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.716582   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716706   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716827   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.792614   23379 ssh_runner.go:195] Run: systemctl --version
	I1202 11:46:17.813470   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:46:17.973535   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:46:17.979920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:46:17.979975   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:46:17.995437   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:46:17.995459   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:46:17.995503   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:46:18.012152   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:46:18.026749   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:46:18.026813   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:46:18.040895   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:46:18.054867   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:46:18.182673   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:46:18.307537   23379 docker.go:233] disabling docker service ...
	I1202 11:46:18.307608   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:46:18.321854   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:46:18.334016   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:46:18.463785   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:46:18.581750   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:46:18.594915   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:46:18.612956   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:46:18.613013   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.623443   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:46:18.623494   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.633789   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.643912   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.654023   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:46:18.664581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.674994   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.691561   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.701797   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:46:18.711042   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:46:18.711090   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:46:18.724638   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:46:18.733743   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:18.862034   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
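The sed edits above drive CRI-O through its drop-in at /etc/crio/crio.conf.d/02-crio.conf before the restart. After this sequence the keys touched by the log end up roughly as below; this is an illustrative reconstruction from the commands shown, not the literal file on the guest image, which contains additional settings.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]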
	I1202 11:46:18.949557   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:46:18.949630   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:46:18.954402   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:46:18.954482   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:46:18.958128   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:46:18.997454   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:46:18.997519   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.025104   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.055599   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:46:19.056875   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:19.059223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059530   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:19.059555   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059704   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:46:19.063855   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:19.078703   23379 kubeadm.go:883] updating cluster {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:46:19.078793   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:19.078828   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:19.116305   23379 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 11:46:19.116376   23379 ssh_runner.go:195] Run: which lz4
	I1202 11:46:19.120271   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1202 11:46:19.120778   23379 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 11:46:19.126218   23379 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 11:46:19.126239   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 11:46:20.425373   23379 crio.go:462] duration metric: took 1.305048201s to copy over tarball
	I1202 11:46:20.425452   23379 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 11:46:22.441192   23379 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01571139s)
	I1202 11:46:22.441225   23379 crio.go:469] duration metric: took 2.015821089s to extract the tarball
	I1202 11:46:22.441233   23379 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 11:46:22.478991   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:22.530052   23379 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:46:22.530074   23379 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:46:22.530083   23379 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1202 11:46:22.530186   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:46:22.530263   23379 ssh_runner.go:195] Run: crio config
	I1202 11:46:22.572985   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:22.573005   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:22.573014   23379 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:46:22.573034   23379 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-604935 NodeName:ha-604935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:46:22.573152   23379 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-604935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:46:22.573183   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:46:22.573233   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:46:22.589221   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:46:22.589338   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
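The static pod above configures kube-vip for ARP-based control-plane load balancing on 192.168.39.254 with Lease-based leader election. Once the cluster is running, the election state can be inspected through the lease named in that config; this is an illustrative command and not part of the captured run:

	kubectl --context ha-604935 -n kube-system get lease plndr-cp-lock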
	I1202 11:46:22.589405   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:46:22.599190   23379 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:46:22.599242   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:46:22.608607   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1202 11:46:22.624652   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:46:22.640379   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1202 11:46:22.655900   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
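The kubeadm configuration rendered earlier is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (and later promoted to kubeadm.yaml before init). Outside of a test run, a config like this can be sanity-checked with recent kubeadm releases; illustrative usage, not part of the captured run:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml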
	I1202 11:46:22.671590   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:46:22.675287   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:22.687449   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:22.815343   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:46:22.830770   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.102
	I1202 11:46:22.830783   23379 certs.go:194] generating shared ca certs ...
	I1202 11:46:22.830798   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.830938   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:46:22.830989   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:46:22.831001   23379 certs.go:256] generating profile certs ...
	I1202 11:46:22.831074   23379 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:46:22.831100   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt with IP's: []
	I1202 11:46:22.963911   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt ...
	I1202 11:46:22.963935   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt: {Name:mk5750a5db627315b9b01ec40b88a97f880b8d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964093   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key ...
	I1202 11:46:22.964105   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key: {Name:mk12b4799c6c082b6ae6dcb6d50922caccda6be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964176   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd
	I1202 11:46:22.964216   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1202 11:46:23.245751   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd ...
	I1202 11:46:23.245777   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd: {Name:mkd02d0517ee36862fb48fa866d0eddc37aac5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.245919   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd ...
	I1202 11:46:23.245934   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd: {Name:mkafae41baf5ffd85374c686e8a6a230d6cd62ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.246014   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:46:23.246102   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:46:23.246163   23379 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:46:23.246178   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt with IP's: []
	I1202 11:46:23.398901   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt ...
	I1202 11:46:23.398937   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt: {Name:mk59ab7004f92d658850310a3f6a84461f824e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399105   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key ...
	I1202 11:46:23.399117   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key: {Name:mk4341731ba8ea8693d50dafd7cfc413608c74fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399195   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:46:23.399214   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:46:23.399232   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:46:23.399248   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:46:23.399263   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:46:23.399278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:46:23.399293   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:46:23.399307   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:46:23.399357   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:46:23.399393   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:46:23.399404   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:46:23.399426   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:46:23.399453   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:46:23.399485   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:46:23.399528   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:23.399560   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.399576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.399590   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.400135   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:46:23.425287   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:46:23.447899   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:46:23.470786   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:46:23.493867   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 11:46:23.517308   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:46:23.540273   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:46:23.562862   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:46:23.587751   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:46:23.615307   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:46:23.645819   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:46:23.670226   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:46:23.686120   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:46:23.691724   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:46:23.702611   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.706991   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.707032   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.712771   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:46:23.723671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:46:23.734402   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738713   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738746   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.744060   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:46:23.754804   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:46:23.765363   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769594   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769630   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.774953   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:46:23.785412   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:46:23.789341   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:46:23.789402   23379 kubeadm.go:392] StartCluster: {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:46:23.789461   23379 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:46:23.789507   23379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:46:23.829185   23379 cri.go:89] found id: ""
	I1202 11:46:23.829258   23379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:46:23.839482   23379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:46:23.849018   23379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:46:23.858723   23379 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:46:23.858741   23379 kubeadm.go:157] found existing configuration files:
	
	I1202 11:46:23.858784   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:46:23.867813   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:46:23.867858   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:46:23.877083   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:46:23.886137   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:46:23.886182   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:46:23.895526   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.904513   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:46:23.904574   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.913938   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:46:23.922913   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:46:23.922950   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
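
	The config check above (kubeadm.go:155-163) looks for the existing /etc/kubernetes/*.conf files and removes any that do not reference the expected control-plane endpoint before kubeadm init runs. A minimal local sketch of that check-then-remove logic, assuming local files rather than commands run over SSH (paths and endpoint taken from the log; error handling simplified):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
    // mention the expected control-plane endpoint, mirroring the grep/rm sequence
    // in the log above (simplified: local filesystem instead of ssh_runner).
    func removeStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                // Missing file: nothing to clean up (the "No such file" case above).
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q not found in %s - removing\n", endpoint, p)
                os.Remove(p)
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
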
	I1202 11:46:23.932249   23379 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 11:46:24.043553   23379 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:46:24.043623   23379 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:46:24.150207   23379 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:46:24.150352   23379 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:46:24.150497   23379 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:46:24.159626   23379 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:46:24.195667   23379 out.go:235]   - Generating certificates and keys ...
	I1202 11:46:24.195776   23379 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:46:24.195834   23379 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:46:24.358436   23379 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:46:24.683719   23379 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:46:24.943667   23379 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:46:25.032560   23379 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:46:25.140726   23379 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:46:25.140883   23379 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.414720   23379 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:46:25.414972   23379 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.596308   23379 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:46:25.682848   23379 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:46:25.908682   23379 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:46:25.908968   23379 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:46:26.057865   23379 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:46:26.190529   23379 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:46:26.320151   23379 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:46:26.522118   23379 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:46:26.687579   23379 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:46:26.688353   23379 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:46:26.693709   23379 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:46:26.695397   23379 out.go:235]   - Booting up control plane ...
	I1202 11:46:26.695494   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:46:26.695563   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:46:26.696118   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:46:26.712309   23379 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:46:26.721469   23379 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:46:26.721525   23379 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:46:26.849672   23379 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:46:26.849831   23379 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:46:27.850918   23379 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001821143s
	I1202 11:46:27.850997   23379 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:46:33.482873   23379 kubeadm.go:310] [api-check] The API server is healthy after 5.633037057s
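
	The [kubelet-check] and [api-check] phases above poll health endpoints until they answer or a 4m0s budget runs out; here the kubelet was healthy after ~1s and the API server after ~5.6s. A rough sketch of that kind of poll against the kubelet endpoint shown in the log (http://127.0.0.1:10248/healthz); the interval and timeout below are illustrative, not kubeadm's exact values:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 OK or the deadline passes,
    // roughly what the [kubelet-check] / [api-check] phases above are doing.
    func waitForHealthz(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
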
	I1202 11:46:33.492749   23379 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:46:33.512336   23379 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:46:34.037238   23379 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:46:34.037452   23379 kubeadm.go:310] [mark-control-plane] Marking the node ha-604935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:46:34.050856   23379 kubeadm.go:310] [bootstrap-token] Using token: 8kw29b.di3rsap6xz9ot94t
	I1202 11:46:34.052035   23379 out.go:235]   - Configuring RBAC rules ...
	I1202 11:46:34.052182   23379 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:46:34.058440   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:46:34.073861   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:46:34.076499   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:46:34.079628   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:46:34.084760   23379 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:46:34.097556   23379 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:46:34.326607   23379 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:46:34.887901   23379 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:46:34.889036   23379 kubeadm.go:310] 
	I1202 11:46:34.889140   23379 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:46:34.889169   23379 kubeadm.go:310] 
	I1202 11:46:34.889273   23379 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:46:34.889281   23379 kubeadm.go:310] 
	I1202 11:46:34.889308   23379 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:46:34.889389   23379 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:46:34.889465   23379 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:46:34.889475   23379 kubeadm.go:310] 
	I1202 11:46:34.889554   23379 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:46:34.889564   23379 kubeadm.go:310] 
	I1202 11:46:34.889639   23379 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:46:34.889649   23379 kubeadm.go:310] 
	I1202 11:46:34.889720   23379 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:46:34.889845   23379 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:46:34.889909   23379 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:46:34.889916   23379 kubeadm.go:310] 
	I1202 11:46:34.889990   23379 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:46:34.890073   23379 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:46:34.890084   23379 kubeadm.go:310] 
	I1202 11:46:34.890170   23379 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890282   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 11:46:34.890321   23379 kubeadm.go:310] 	--control-plane 
	I1202 11:46:34.890328   23379 kubeadm.go:310] 
	I1202 11:46:34.890409   23379 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:46:34.890416   23379 kubeadm.go:310] 
	I1202 11:46:34.890483   23379 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890568   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 11:46:34.891577   23379 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 11:46:34.891597   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:34.891603   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:34.892960   23379 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 11:46:34.893988   23379 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 11:46:34.899231   23379 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 11:46:34.899255   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 11:46:34.917969   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 11:46:35.272118   23379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:46:35.272198   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.272259   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935 minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=true
	I1202 11:46:35.310028   23379 ops.go:34] apiserver oom_adj: -16
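
	The "apiserver oom_adj: -16" value above comes from the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj` command. A small sketch of reading that value for a given PID (PID lookup left out; note oom_adj is the older, deprecated knob that the log reads, not oom_score_adj):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // oomAdj reads /proc/<pid>/oom_adj, the value the log reports as "apiserver oom_adj: -16".
    func oomAdj(pid int) (int, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }

    func main() {
        v, err := oomAdj(os.Getpid()) // example: this process's own value
        fmt.Println(v, err)
    }
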
	I1202 11:46:35.408095   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.908268   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.408944   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.909158   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.408454   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.909038   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.408700   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.908314   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:39.023834   23379 kubeadm.go:1113] duration metric: took 3.751689624s to wait for elevateKubeSystemPrivileges
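
	The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges step waiting for the `default` ServiceAccount to appear in the new cluster (about 3.75s here). A hedged sketch of that poll using kubectl via os/exec; the binary path, kubeconfig path, and interval are placeholders, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the polling loop in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        // Paths here are illustrative placeholders.
        err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
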
	I1202 11:46:39.023871   23379 kubeadm.go:394] duration metric: took 15.234471878s to StartCluster
	I1202 11:46:39.023890   23379 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.023968   23379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.024843   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.025096   23379 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.025129   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:46:39.025139   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:46:39.025146   23379 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 11:46:39.025247   23379 addons.go:69] Setting storage-provisioner=true in profile "ha-604935"
	I1202 11:46:39.025268   23379 addons.go:234] Setting addon storage-provisioner=true in "ha-604935"
	I1202 11:46:39.025297   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.025365   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.025267   23379 addons.go:69] Setting default-storageclass=true in profile "ha-604935"
	I1202 11:46:39.025420   23379 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-604935"
	I1202 11:46:39.025726   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025773   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.025867   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025904   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.040510   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I1202 11:46:39.040567   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1202 11:46:39.041007   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041111   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041642   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041669   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041855   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042005   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042156   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.042501   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.042547   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.044200   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.044508   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 11:46:39.044954   23379 cert_rotation.go:140] Starting client certificate rotation controller
	I1202 11:46:39.045176   23379 addons.go:234] Setting addon default-storageclass=true in "ha-604935"
	I1202 11:46:39.045212   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.045509   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.045548   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.056740   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I1202 11:46:39.057180   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.057736   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.057761   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.058043   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.058254   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.059103   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I1202 11:46:39.059506   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.059989   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.060003   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.060030   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.060305   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.060780   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.060821   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.061507   23379 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:46:39.062672   23379 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.062687   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:46:39.062700   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.065792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066230   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.066257   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066378   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.066549   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.066694   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.066850   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.076289   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I1202 11:46:39.076690   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.077099   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.077122   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.077418   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.077579   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.079081   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.079273   23379 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.079287   23379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:46:39.079300   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.082143   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082579   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.082597   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082752   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.082910   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.083074   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.083219   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.138927   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:46:39.202502   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.264780   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.722155   23379 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
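
	The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway; the stanza it injects (taken directly from the sed expression in that command) looks like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
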
	I1202 11:46:39.944980   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945000   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945116   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945141   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945269   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945284   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945292   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945298   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945459   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945489   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945500   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945513   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945457   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945578   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945581   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945620   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945796   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945844   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945933   23379 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 11:46:39.945977   23379 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 11:46:39.945813   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.946087   23379 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1202 11:46:39.946099   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.946109   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.946117   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.963939   23379 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1202 11:46:39.964651   23379 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1202 11:46:39.964667   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.964677   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.964684   23379 round_trippers.go:473]     Content-Type: application/json
	I1202 11:46:39.964689   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.968484   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
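
	The round_trippers lines above show the client GET-ing the StorageClass list and then PUT-ing the `standard` class while enabling default-storageclass. A rough client-go sketch of the read side of that exchange; the kubeconfig path is a placeholder and the real addon code differs:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the log uses the test profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Equivalent of the GET /apis/storage.k8s.io/v1/storageclasses above.
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
        }
    }
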
	I1202 11:46:39.968627   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.968639   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.968886   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.968902   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.970238   23379 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1202 11:46:39.971383   23379 addons.go:510] duration metric: took 946.244666ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 11:46:39.971420   23379 start.go:246] waiting for cluster config update ...
	I1202 11:46:39.971435   23379 start.go:255] writing updated cluster config ...
	I1202 11:46:39.972900   23379 out.go:201] 
	I1202 11:46:39.974083   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.974147   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.975564   23379 out.go:177] * Starting "ha-604935-m02" control-plane node in "ha-604935" cluster
	I1202 11:46:39.976682   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:39.976701   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:46:39.976788   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:46:39.976800   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:46:39.976872   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.977100   23379 start.go:360] acquireMachinesLock for ha-604935-m02: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:46:39.977152   23379 start.go:364] duration metric: took 22.26µs to acquireMachinesLock for "ha-604935-m02"
	I1202 11:46:39.977175   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.977250   23379 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1202 11:46:39.978689   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:46:39.978765   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.978800   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.993356   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I1202 11:46:39.993775   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.994235   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.994266   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.994666   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.994881   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:46:39.995033   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:46:39.995225   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:46:39.995256   23379 client.go:168] LocalClient.Create starting
	I1202 11:46:39.995293   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:46:39.995339   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995364   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995433   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:46:39.995460   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995482   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995508   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:46:39.995520   23379 main.go:141] libmachine: (ha-604935-m02) Calling .PreCreateCheck
	I1202 11:46:39.995688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:46:39.996035   23379 main.go:141] libmachine: Creating machine...
	I1202 11:46:39.996049   23379 main.go:141] libmachine: (ha-604935-m02) Calling .Create
	I1202 11:46:39.996158   23379 main.go:141] libmachine: (ha-604935-m02) Creating KVM machine...
	I1202 11:46:39.997515   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing default KVM network
	I1202 11:46:39.997667   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing private KVM network mk-ha-604935
	I1202 11:46:39.997862   23379 main.go:141] libmachine: (ha-604935-m02) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:39.997894   23379 main.go:141] libmachine: (ha-604935-m02) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:46:39.997973   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:39.997863   23734 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:39.998066   23379 main.go:141] libmachine: (ha-604935-m02) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:46:40.246601   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.246459   23734 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa...
	I1202 11:46:40.345704   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345606   23734 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk...
	I1202 11:46:40.345732   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing magic tar header
	I1202 11:46:40.345746   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing SSH key tar header
	I1202 11:46:40.345760   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345732   23734 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:40.345873   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02
	I1202 11:46:40.345899   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:46:40.345912   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 (perms=drwx------)
	I1202 11:46:40.345936   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:40.345967   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:46:40.345981   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:46:40.345991   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:46:40.346001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:46:40.346014   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home
	I1202 11:46:40.346025   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Skipping /home - not owner
	I1202 11:46:40.346072   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:46:40.346108   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:46:40.346124   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:46:40.346137   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:46:40.346162   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:40.346895   23379 main.go:141] libmachine: (ha-604935-m02) define libvirt domain using xml: 
	I1202 11:46:40.346916   23379 main.go:141] libmachine: (ha-604935-m02) <domain type='kvm'>
	I1202 11:46:40.346942   23379 main.go:141] libmachine: (ha-604935-m02)   <name>ha-604935-m02</name>
	I1202 11:46:40.346957   23379 main.go:141] libmachine: (ha-604935-m02)   <memory unit='MiB'>2200</memory>
	I1202 11:46:40.346974   23379 main.go:141] libmachine: (ha-604935-m02)   <vcpu>2</vcpu>
	I1202 11:46:40.346979   23379 main.go:141] libmachine: (ha-604935-m02)   <features>
	I1202 11:46:40.346986   23379 main.go:141] libmachine: (ha-604935-m02)     <acpi/>
	I1202 11:46:40.346990   23379 main.go:141] libmachine: (ha-604935-m02)     <apic/>
	I1202 11:46:40.346995   23379 main.go:141] libmachine: (ha-604935-m02)     <pae/>
	I1202 11:46:40.347001   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347008   23379 main.go:141] libmachine: (ha-604935-m02)   </features>
	I1202 11:46:40.347027   23379 main.go:141] libmachine: (ha-604935-m02)   <cpu mode='host-passthrough'>
	I1202 11:46:40.347034   23379 main.go:141] libmachine: (ha-604935-m02)   
	I1202 11:46:40.347038   23379 main.go:141] libmachine: (ha-604935-m02)   </cpu>
	I1202 11:46:40.347043   23379 main.go:141] libmachine: (ha-604935-m02)   <os>
	I1202 11:46:40.347049   23379 main.go:141] libmachine: (ha-604935-m02)     <type>hvm</type>
	I1202 11:46:40.347054   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='cdrom'/>
	I1202 11:46:40.347060   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='hd'/>
	I1202 11:46:40.347066   23379 main.go:141] libmachine: (ha-604935-m02)     <bootmenu enable='no'/>
	I1202 11:46:40.347072   23379 main.go:141] libmachine: (ha-604935-m02)   </os>
	I1202 11:46:40.347077   23379 main.go:141] libmachine: (ha-604935-m02)   <devices>
	I1202 11:46:40.347082   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='cdrom'>
	I1202 11:46:40.347089   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/boot2docker.iso'/>
	I1202 11:46:40.347096   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hdc' bus='scsi'/>
	I1202 11:46:40.347101   23379 main.go:141] libmachine: (ha-604935-m02)       <readonly/>
	I1202 11:46:40.347105   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347111   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='disk'>
	I1202 11:46:40.347118   23379 main.go:141] libmachine: (ha-604935-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:46:40.347128   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk'/>
	I1202 11:46:40.347135   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hda' bus='virtio'/>
	I1202 11:46:40.347140   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347144   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347152   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='mk-ha-604935'/>
	I1202 11:46:40.347156   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347162   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347167   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347172   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='default'/>
	I1202 11:46:40.347178   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347183   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347187   23379 main.go:141] libmachine: (ha-604935-m02)     <serial type='pty'>
	I1202 11:46:40.347194   23379 main.go:141] libmachine: (ha-604935-m02)       <target port='0'/>
	I1202 11:46:40.347204   23379 main.go:141] libmachine: (ha-604935-m02)     </serial>
	I1202 11:46:40.347211   23379 main.go:141] libmachine: (ha-604935-m02)     <console type='pty'>
	I1202 11:46:40.347221   23379 main.go:141] libmachine: (ha-604935-m02)       <target type='serial' port='0'/>
	I1202 11:46:40.347236   23379 main.go:141] libmachine: (ha-604935-m02)     </console>
	I1202 11:46:40.347247   23379 main.go:141] libmachine: (ha-604935-m02)     <rng model='virtio'>
	I1202 11:46:40.347255   23379 main.go:141] libmachine: (ha-604935-m02)       <backend model='random'>/dev/random</backend>
	I1202 11:46:40.347264   23379 main.go:141] libmachine: (ha-604935-m02)     </rng>
	I1202 11:46:40.347271   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347282   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347295   23379 main.go:141] libmachine: (ha-604935-m02)   </devices>
	I1202 11:46:40.347306   23379 main.go:141] libmachine: (ha-604935-m02) </domain>
	I1202 11:46:40.347319   23379 main.go:141] libmachine: (ha-604935-m02) 
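
	The XML printed above is the libvirt domain definition for the second control-plane VM. A minimal sketch of defining and starting such a domain with the libvirt Go bindings, assuming github.com/libvirt/libvirt-go is available; the kvm2 driver itself does considerably more (disk image creation, DHCP lease handling, static IP reservation):

    package main

    import (
        "fmt"
        "os"

        libvirt "github.com/libvirt/libvirt-go"
    )

    func main() {
        // Hypothetical file holding a domain definition like the one printed above.
        xml, err := os.ReadFile("ha-604935-m02.xml")
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // "define libvirt domain using xml"
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        // Start the defined domain ("Creating domain...").
        if err := dom.Create(); err != nil {
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("started", name)
    }
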
	I1202 11:46:40.353726   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:2b:bd:2e in network default
	I1202 11:46:40.354276   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring networks are active...
	I1202 11:46:40.354296   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:40.355011   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network default is active
	I1202 11:46:40.355333   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network mk-ha-604935 is active
	I1202 11:46:40.355771   23379 main.go:141] libmachine: (ha-604935-m02) Getting domain xml...
	I1202 11:46:40.356531   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:41.552192   23379 main.go:141] libmachine: (ha-604935-m02) Waiting to get IP...
	I1202 11:46:41.552923   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.553342   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.553365   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.553311   23734 retry.go:31] will retry after 250.26239ms: waiting for machine to come up
	I1202 11:46:41.804774   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.805224   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.805252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.805182   23734 retry.go:31] will retry after 337.906383ms: waiting for machine to come up
	I1202 11:46:42.144697   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.145141   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.145174   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.145097   23734 retry.go:31] will retry after 345.416251ms: waiting for machine to come up
	I1202 11:46:42.491650   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.492205   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.492269   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.492187   23734 retry.go:31] will retry after 576.231118ms: waiting for machine to come up
	I1202 11:46:43.069832   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.070232   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.070258   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.070185   23734 retry.go:31] will retry after 484.637024ms: waiting for machine to come up
	I1202 11:46:43.557338   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.557918   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.557945   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.557876   23734 retry.go:31] will retry after 878.448741ms: waiting for machine to come up
	I1202 11:46:44.437501   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:44.437938   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:44.437963   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:44.437910   23734 retry.go:31] will retry after 1.136235758s: waiting for machine to come up
	I1202 11:46:45.575985   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:45.576450   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:45.576493   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:45.576415   23734 retry.go:31] will retry after 1.136366132s: waiting for machine to come up
	I1202 11:46:46.714826   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:46.715252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:46.715280   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:46.715201   23734 retry.go:31] will retry after 1.737559308s: waiting for machine to come up
	I1202 11:46:48.455006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:48.455487   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:48.455517   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:48.455436   23734 retry.go:31] will retry after 1.586005802s: waiting for machine to come up
	I1202 11:46:50.042947   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:50.043522   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:50.043548   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:50.043471   23734 retry.go:31] will retry after 1.94342421s: waiting for machine to come up
	I1202 11:46:51.988099   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:51.988615   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:51.988639   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:51.988575   23734 retry.go:31] will retry after 3.527601684s: waiting for machine to come up
	I1202 11:46:55.517564   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:55.518092   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:55.518121   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:55.518041   23734 retry.go:31] will retry after 3.578241105s: waiting for machine to come up
	I1202 11:46:59.097310   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:59.097631   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:59.097651   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:59.097596   23734 retry.go:31] will retry after 5.085934719s: waiting for machine to come up
	I1202 11:47:04.187907   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.188401   23379 main.go:141] libmachine: (ha-604935-m02) Found IP for machine: 192.168.39.96
	I1202 11:47:04.188429   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has current primary IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
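
	The "will retry after ..." lines above are minikube's retry helper waiting for the new VM to obtain a DHCP lease, sleeping a bit longer on each attempt until the IP (192.168.39.96) appears. A generic sketch of that retry-with-growing-delay pattern; the delays and cap are illustrative, not the exact values retry.go uses:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or maxWait is exhausted,
    // sleeping a little longer between attempts, similar in spirit to the
    // "will retry after ..." loop in the log above.
    func retryWithBackoff(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().Add(delay).After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2
            }
        }
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, time.Minute)
        fmt.Println(err)
    }
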
	I1202 11:47:04.188437   23379 main.go:141] libmachine: (ha-604935-m02) Reserving static IP address...
	I1202 11:47:04.188743   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "ha-604935-m02", mac: "52:54:00:42:3a:28", ip: "192.168.39.96"} in network mk-ha-604935
	I1202 11:47:04.256531   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:04.256562   23379 main.go:141] libmachine: (ha-604935-m02) Reserved static IP address: 192.168.39.96
	I1202 11:47:04.256575   23379 main.go:141] libmachine: (ha-604935-m02) Waiting for SSH to be available...
	I1202 11:47:04.258823   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.259113   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935
	I1202 11:47:04.259157   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:42:3a:28
	I1202 11:47:04.259288   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:04.259308   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:04.259373   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:04.259397   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:04.259411   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:04.263986   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:47:04.264009   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:47:04.264016   23379 main.go:141] libmachine: (ha-604935-m02) DBG | command : exit 0
	I1202 11:47:04.264041   23379 main.go:141] libmachine: (ha-604935-m02) DBG | err     : exit status 255
	I1202 11:47:04.264051   23379 main.go:141] libmachine: (ha-604935-m02) DBG | output  : 
	I1202 11:47:07.264654   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:07.266849   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267221   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.267249   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267406   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:07.267434   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:07.267472   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:07.267495   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:07.267507   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:07.391931   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: <nil>: 
	I1202 11:47:07.392120   23379 main.go:141] libmachine: (ha-604935-m02) KVM machine creation complete!
	I1202 11:47:07.392498   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:07.393039   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393215   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393337   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:47:07.393354   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetState
	I1202 11:47:07.394565   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:47:07.394578   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:47:07.394584   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:47:07.394589   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.396709   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.397033   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397522   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.398890   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399081   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399216   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.399356   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.399544   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.399555   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:47:07.503380   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.503409   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:47:07.503420   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.506083   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506469   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.506502   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506641   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.506811   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.506958   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.507087   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.507236   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.507398   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.507407   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:47:07.612741   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:47:07.612843   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:47:07.612858   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:47:07.612872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613105   23379 buildroot.go:166] provisioning hostname "ha-604935-m02"
	I1202 11:47:07.613126   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613280   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.615682   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.616029   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616193   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.616355   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616496   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616615   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.616752   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.616925   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.616942   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m02 && echo "ha-604935-m02" | sudo tee /etc/hostname
	I1202 11:47:07.739596   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m02
	
	I1202 11:47:07.739622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.742125   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742500   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.742532   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742709   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.742872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743043   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743173   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.743334   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.743539   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.743561   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:47:07.857236   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.857259   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:47:07.857284   23379 buildroot.go:174] setting up certificates
	I1202 11:47:07.857292   23379 provision.go:84] configureAuth start
	I1202 11:47:07.857300   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.857527   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:07.860095   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860513   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.860543   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860692   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.862585   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.862958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.862988   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.863114   23379 provision.go:143] copyHostCerts
	I1202 11:47:07.863150   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863186   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:47:07.863197   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863272   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:47:07.863374   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863401   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:47:07.863412   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863452   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:47:07.863528   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863553   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:47:07.863563   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863595   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:47:07.863674   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m02 san=[127.0.0.1 192.168.39.96 ha-604935-m02 localhost minikube]
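The "generating server cert ... san=[...]" step issues the node's TLS serving certificate, signed by the shared minikube CA, with SANs covering the loopback address, the node IP and the hostnames listed above. A standard-library sketch of what that involves, assuming PKCS#1-encoded CA files and placeholder file names (this is not minikube's actual provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair; paths are placeholders for the ca.pem/ca-key.pem files
	// referenced in the log. Assumes a PKCS#1 RSA private key.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// New server key plus a certificate whose SANs match the san=[...] list
	// in the provision.go line above.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-604935-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.96")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	check(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
}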
	I1202 11:47:08.103724   23379 provision.go:177] copyRemoteCerts
	I1202 11:47:08.103779   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:47:08.103802   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.106490   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.106829   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.106859   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.107025   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.107200   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.107328   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.107425   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.190303   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:47:08.190378   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:47:08.217749   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:47:08.217812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:47:08.240576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:47:08.240626   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:47:08.263351   23379 provision.go:87] duration metric: took 406.049409ms to configureAuth
	I1202 11:47:08.263374   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:47:08.263549   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:08.263627   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.266183   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266506   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.266542   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266657   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.266822   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.266953   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.267045   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.267212   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.267440   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.267458   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:47:08.480702   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:47:08.480726   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:47:08.480737   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetURL
	I1202 11:47:08.481946   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using libvirt version 6000000
	I1202 11:47:08.484074   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484465   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.484486   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484652   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:47:08.484665   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:47:08.484672   23379 client.go:171] duration metric: took 28.489409707s to LocalClient.Create
	I1202 11:47:08.484691   23379 start.go:167] duration metric: took 28.489467042s to libmachine.API.Create "ha-604935"
	I1202 11:47:08.484701   23379 start.go:293] postStartSetup for "ha-604935-m02" (driver="kvm2")
	I1202 11:47:08.484710   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:47:08.484726   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.484947   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:47:08.484979   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.487275   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487627   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.487652   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487763   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.487916   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.488023   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.488157   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.570418   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:47:08.574644   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:47:08.574668   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:47:08.574734   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:47:08.574834   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:47:08.574847   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:47:08.574955   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:47:08.584296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:08.607137   23379 start.go:296] duration metric: took 122.426316ms for postStartSetup
	I1202 11:47:08.607176   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:08.607688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.609787   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610122   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.610140   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610348   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:08.610507   23379 start.go:128] duration metric: took 28.633177558s to createHost
	I1202 11:47:08.610528   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.612576   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.612933   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.612958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.613094   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.613256   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613387   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613495   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.613675   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.613819   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.613829   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:47:08.721072   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140028.701362667
	
	I1202 11:47:08.721095   23379 fix.go:216] guest clock: 1733140028.701362667
	I1202 11:47:08.721104   23379 fix.go:229] Guest: 2024-12-02 11:47:08.701362667 +0000 UTC Remote: 2024-12-02 11:47:08.610518479 +0000 UTC m=+77.169276420 (delta=90.844188ms)
	I1202 11:47:08.721123   23379 fix.go:200] guest clock delta is within tolerance: 90.844188ms
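The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the drift if it stays within tolerance. A small stand-in for that check, reusing the two timestamps from the log (the 2s tolerance is an illustrative value, not necessarily minikube's):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns guest minus
// host; a hypothetical stand-in for the fix.go guest-clock check.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Both values are taken from the log above; float parsing loses a little
	// sub-microsecond precision, which is irrelevant for this check.
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-12-02 11:47:08.610518479 +0000 UTC")
	if err != nil {
		panic(err)
	}
	d, err := clockDelta("1733140028.701362667", host)
	if err != nil {
		panic(err)
	}
	tolerance := 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance=%v\n", d, d < tolerance && d > -tolerance)
}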
	I1202 11:47:08.721129   23379 start.go:83] releasing machines lock for "ha-604935-m02", held for 28.743964366s
	I1202 11:47:08.721146   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.721362   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.723610   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.723892   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.723917   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.725920   23379 out.go:177] * Found network options:
	I1202 11:47:08.727151   23379 out.go:177]   - NO_PROXY=192.168.39.102
	W1202 11:47:08.728253   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.728295   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728718   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728888   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728964   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:47:08.729018   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	W1202 11:47:08.729077   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.729140   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:47:08.729159   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.731377   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731690   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731736   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.731757   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731905   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732089   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732138   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.732161   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.732263   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732335   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732412   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.732482   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732772   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.961089   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:47:08.967388   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:47:08.967456   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:47:08.983898   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:47:08.983919   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:47:08.983976   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:47:08.999755   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:47:09.012969   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:47:09.013013   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:47:09.025774   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:47:09.038595   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:47:09.155525   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:47:09.315590   23379 docker.go:233] disabling docker service ...
	I1202 11:47:09.315645   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:47:09.329428   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:47:09.341852   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:47:09.455987   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:47:09.568119   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:47:09.581349   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:47:09.599069   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:47:09.599131   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.609102   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:47:09.609172   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.619619   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.629809   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.640881   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:47:09.650894   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.660662   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.676866   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
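The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as the cgroup manager (the conmon_cgroup and unprivileged-port edits follow the same pattern). A rough local equivalent of the first two edits, offered as a sketch rather than the code path minikube uses (which runs sed over SSH):

package main

import (
	"os"
	"regexp"
)

func main() {
	// Path and replacement values mirror the log above.
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}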
	I1202 11:47:09.687794   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:47:09.696987   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:47:09.697035   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:47:09.709512   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:47:09.718617   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:09.833443   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:47:09.924039   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:47:09.924108   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:47:09.929102   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:47:09.929151   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:47:09.932909   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:47:09.970799   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:47:09.970857   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:09.997925   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:10.026009   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:47:10.027185   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:47:10.028209   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:10.030558   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.030843   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:10.030865   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.031081   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:47:10.034913   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:10.046993   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:47:10.047168   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:10.047464   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.047509   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.061535   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I1202 11:47:10.061962   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.062500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.062519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.062832   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.062993   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:47:10.064396   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:10.064646   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.064674   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.078237   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1202 11:47:10.078536   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.078918   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.078933   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.079205   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.079368   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:10.079517   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.96
	I1202 11:47:10.079528   23379 certs.go:194] generating shared ca certs ...
	I1202 11:47:10.079548   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.079686   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:47:10.079733   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:47:10.079746   23379 certs.go:256] generating profile certs ...
	I1202 11:47:10.079838   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:47:10.079869   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3
	I1202 11:47:10.079889   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.254]
	I1202 11:47:10.265166   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 ...
	I1202 11:47:10.265189   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3: {Name:mkdd0b8b1421fc39bdc7a4c81c195bce0584f3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265365   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 ...
	I1202 11:47:10.265383   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3: {Name:mk317f3cb02e9fefc92b2802c6865b7da9a08a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265473   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:47:10.265636   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:47:10.265813   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:47:10.265832   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:47:10.265850   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:47:10.265871   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:47:10.265888   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:47:10.265904   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:47:10.265920   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:47:10.265936   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:47:10.265955   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:47:10.266021   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:47:10.266059   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:47:10.266073   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:47:10.266106   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:47:10.266137   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:47:10.266166   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:47:10.266222   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:10.266260   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.266282   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.266301   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.266341   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:10.268885   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269241   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:10.269271   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269395   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:10.269566   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:10.269669   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:10.269777   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:10.344538   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:47:10.349538   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:47:10.360402   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:47:10.364479   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:47:10.374445   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:47:10.378811   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:47:10.389170   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:47:10.392986   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:47:10.403485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:47:10.408617   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:47:10.418394   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:47:10.422245   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:47:10.432316   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:47:10.458960   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:47:10.483156   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:47:10.505724   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:47:10.527955   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 11:47:10.550812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:47:10.573508   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:47:10.595760   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:47:10.618337   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:47:10.641184   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:47:10.663681   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:47:10.687678   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:47:10.703651   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:47:10.719297   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:47:10.734755   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:47:10.751060   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:47:10.767295   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:47:10.783201   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:47:10.798776   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:47:10.804781   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:47:10.814853   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819107   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819150   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.824680   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:47:10.834444   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:47:10.847333   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852096   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852141   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.857456   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:47:10.867671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:47:10.878797   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883014   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883050   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.888463   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
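Each CA installed under /usr/share/ca-certificates is also linked as /etc/ssl/certs/<subject-hash>.0, which is how OpenSSL-based clients locate it; the "openssl x509 -hash -noout" and "ln -fs" pair above does exactly that. A sketch of the same step with a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash for a certificate and links
// <certsDir>/<hash>.0 to it, mirroring the shell commands in the log above.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}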
	I1202 11:47:10.900014   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:47:10.903987   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:47:10.904033   23379 kubeadm.go:934] updating node {m02 192.168.39.96 8443 v1.31.2 crio true true} ...
	I1202 11:47:10.904108   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:47:10.904143   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:47:10.904172   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:47:10.920663   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:47:10.920727   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
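kube-vip runs as a static pod on each control-plane node and advertises the HA VIP 192.168.39.254 on port 8443. A trimmed-down sketch of the "generating kube-vip config" step, rendering only a handful of the environment variables shown above into a manifest file (this template is an illustration, not minikube's own):

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for the full static-pod manifest printed in the log:
// only the VIP address, port and interface are parameterised here.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.6
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
    - {name: cp_enable, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// On the node this would land in /etc/kubernetes/manifests/kube-vip.yaml;
	// written locally here so the sketch runs without root.
	f, err := os.Create("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, map[string]string{
		"Interface": "eth0", "Port": "8443", "VIP": "192.168.39.254",
	}); err != nil {
		panic(err)
	}
}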
	I1202 11:47:10.920782   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.929813   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:47:10.929869   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.938939   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:47:10.938963   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939004   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1202 11:47:10.939023   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939098   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1202 11:47:10.943516   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:47:10.943543   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:47:11.580278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.580378   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.585380   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:47:11.585410   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:47:11.699996   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:11.746001   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.746098   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.755160   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:47:11.755193   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
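
The three binaries (kubectl, kubeadm, kubelet) are fetched from dl.k8s.io with a checksum taken from the matching .sha256 file, cached under .minikube/cache, and then copied with scp into /var/lib/minikube/binaries on the node. A minimal sketch of a SHA-256-verified download in Go (hypothetical helper, not minikube's download package):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchVerified downloads url to dest and compares its SHA-256 against the
    // digest published at url+".sha256" (the scheme the checksum=file:... URLs in
    // the log point at).
    func fetchVerified(url, dest string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := sha256.New()
    	// Hash the stream while writing it to disk.
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}

    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	want, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}

    	got := hex.EncodeToString(h.Sum(nil))
    	if got != strings.TrimSpace(string(want)) {
    		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
    	}
    	return nil
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
    	if err := fetchVerified(url, "kubectl"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
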
	I1202 11:47:12.167193   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:47:12.177362   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 11:47:12.193477   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:47:12.209277   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:47:12.225224   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:47:12.229096   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:12.241465   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:12.355965   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:12.372721   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:12.373199   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:12.373246   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:12.387521   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1202 11:47:12.387950   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:12.388471   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:12.388495   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:12.388817   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:12.389008   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:12.389136   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:47:12.389250   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:47:12.389272   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:12.391559   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.391918   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:12.391947   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.392078   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:12.392244   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:12.392404   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:12.392523   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:12.542455   23379 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:12.542510   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443"
	I1202 11:47:33.298276   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443": (20.75572497s)
	I1202 11:47:33.298324   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:47:33.868140   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m02 minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:47:34.014505   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:47:34.151913   23379 start.go:319] duration metric: took 21.762775302s to joinCluster
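
After the join, the new control-plane node is labeled with the minikube.k8s.io/* metadata and its control-plane NoSchedule taint is removed, both via kubectl run over SSH. The same labeling could be done directly with client-go; a minimal sketch (the kubeconfig path and label value mirror this run, the helper itself is hypothetical):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the same kubeconfig the test harness uses (path is illustrative).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Strategic-merge patch equivalent to:
    	//   kubectl label --overwrite nodes ha-604935-m02 minikube.k8s.io/primary=false
    	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"false"}}}`)
    	node, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-604935-m02",
    		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("labeled", node.Name)
    }
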
	I1202 11:47:34.151988   23379 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:34.152289   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:34.153405   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:47:34.154583   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:34.458218   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:34.537753   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:47:34.537985   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:47:34.538049   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:47:34.538237   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m02" to be "Ready" ...
	I1202 11:47:34.538328   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:34.538338   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:34.538353   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:34.538361   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:34.553164   23379 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1202 11:47:35.038636   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.038655   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.038663   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.038667   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.043410   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:35.539240   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.539268   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.539288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.539295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.543768   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:36.038477   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.038500   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.038510   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.038514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.044852   23379 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1202 11:47:36.539264   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.539282   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.539291   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.539294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.541884   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:36.542608   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:37.039323   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.039344   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.039355   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.039363   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.042762   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:37.539267   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.539288   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.539298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.539302   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.542085   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:38.039187   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.039205   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.039213   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.039217   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.042510   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:38.538564   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.538590   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.538602   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.538607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.543229   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:38.543842   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:39.039431   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.039454   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.039465   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.039470   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.043101   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:39.538521   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.538565   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.544151   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:47:40.039125   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.039142   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.039150   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.039155   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.041928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:40.539447   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.539466   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.539477   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.539482   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.542088   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.039165   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.039194   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.039206   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.039214   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.042019   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.042646   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:41.538430   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.538449   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.538456   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.538460   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.541300   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:42.038543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.038564   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.038574   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.038579   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.042807   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:42.539123   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.539144   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.539155   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.539168   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.615775   23379 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1202 11:47:43.038628   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.038651   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.038660   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.038670   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.041582   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:43.538519   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.538566   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.542876   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:43.543448   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:44.038473   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.038493   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.038501   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.038506   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.041916   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:44.538909   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.538934   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.538946   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.538954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.542475   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.039019   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.039039   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.039046   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.039050   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.042662   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.539381   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.539404   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.539414   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.539419   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.543229   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.544177   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:46.038600   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.038622   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.038630   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.038635   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.041460   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:46.538597   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.538618   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.538628   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.538632   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.541444   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:47.038797   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.038817   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.038825   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.038828   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.041962   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:47.539440   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.539463   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.539470   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.539474   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.543115   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.039282   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.039306   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.039316   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.039320   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.042491   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.043162   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:48.539348   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.539372   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.539382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.539387   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.542583   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.038466   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.038485   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.038493   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.038498   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.041480   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.539130   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.539151   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.539162   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.539166   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.542870   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.543570   23379 node_ready.go:49] node "ha-604935-m02" has status "Ready":"True"
	I1202 11:47:49.543589   23379 node_ready.go:38] duration metric: took 15.005336835s for node "ha-604935-m02" to be "Ready" ...
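
The node_ready wait above is a simple poll of GET /api/v1/nodes/<name> roughly every 500ms until the NodeReady condition reports True. A minimal client-go sketch of the same check (hypothetical helper; minikube builds its REST client from the profile config shown in the kapi.go line above):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True
    // or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "ha-604935-m02", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("ha-604935-m02 is Ready")
    }
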
	I1202 11:47:49.543598   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:47:49.543686   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:49.543695   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.543702   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.543707   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.548022   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.557050   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.557145   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:47:49.557159   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.557169   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.557181   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.561541   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.562194   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.562212   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.562222   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.562229   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.564378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.564821   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.564836   23379 pod_ready.go:82] duration metric: took 7.7579ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564845   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:47:49.564905   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.564912   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.564919   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.566980   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.567489   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.567501   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.567509   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.567514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.569545   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.570321   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.570337   23379 pod_ready.go:82] duration metric: took 5.482367ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570346   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570395   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:47:49.570402   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.570408   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.570416   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.572224   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.572830   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.572845   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.572852   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.572856   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.574847   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.575387   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.575407   23379 pod_ready.go:82] duration metric: took 5.05521ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575417   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575471   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:49.575482   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.575492   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.575497   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.577559   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.578025   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.578036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.578042   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.578046   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.580244   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.075930   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.075955   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.075967   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.075972   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.078932   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.079644   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.079660   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.079671   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.079679   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.083049   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.576373   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.576396   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.576404   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.576408   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.579581   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.580413   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.580428   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.580435   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.580439   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.582674   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.075671   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:51.075692   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.075700   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.075705   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.080547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:51.081109   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.081140   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.081151   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.081159   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.083775   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.084570   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.084587   23379 pod_ready.go:82] duration metric: took 1.509162413s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084605   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084654   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:47:51.084661   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.084668   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.084676   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.086997   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.139895   23379 request.go:632] Waited for 52.198749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139936   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139941   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.139948   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.139954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.142459   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.143143   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.143164   23379 pod_ready.go:82] duration metric: took 58.549955ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.143176   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.339592   23379 request.go:632] Waited for 196.342057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339648   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.339657   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.339665   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.342939   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.539862   23379 request.go:632] Waited for 196.164588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539935   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.539943   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.539950   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.543209   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.543865   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.543882   23379 pod_ready.go:82] duration metric: took 400.698772ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
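
The "Waited ... due to client-side throttling" lines come from client-go's default rate limiter: the rest.Config printed earlier has QPS:0, Burst:0, which client-go treats as the defaults (5 requests/s, burst of 10), so bursts of back-to-back GETs are queued on the client side rather than by API priority and fairness. Raising the limits is a one-line change on the config; a minimal sketch (the values are illustrative, not what minikube uses):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// With QPS/Burst left at 0, client-go falls back to 5 QPS / burst 10 and
    	// logs the "client-side throttling" waits seen in this section.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("client ready:", cs != nil)
    }
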
	I1202 11:47:51.543892   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.739144   23379 request.go:632] Waited for 195.19473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739219   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739235   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.739245   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.739249   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.741900   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.940184   23379 request.go:632] Waited for 197.361013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940269   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940278   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.940285   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.940289   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.943128   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.943706   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.943727   23379 pod_ready.go:82] duration metric: took 399.828238ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.943741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.139832   23379 request.go:632] Waited for 196.024828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139908   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.139915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.139922   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.143273   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.339296   23379 request.go:632] Waited for 195.254025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339382   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.339392   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.339396   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.343086   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.343632   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.343651   23379 pod_ready.go:82] duration metric: took 399.901549ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.343664   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.540119   23379 request.go:632] Waited for 196.382954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540223   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.540246   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.540254   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.544789   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.739964   23379 request.go:632] Waited for 194.383281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740029   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.740047   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.740056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.744675   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.745274   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.745291   23379 pod_ready.go:82] duration metric: took 401.620034ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.745302   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.939398   23379 request.go:632] Waited for 194.014981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939448   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939453   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.939460   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.939466   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.942473   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:53.139562   23379 request.go:632] Waited for 196.368019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139626   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139631   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.139639   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.139642   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.142786   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.143361   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.143382   23379 pod_ready.go:82] duration metric: took 398.068666ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.143391   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.339501   23379 request.go:632] Waited for 196.04496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339586   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339596   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.339607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.339618   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.343080   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.540159   23379 request.go:632] Waited for 196.184742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540226   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540246   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.540255   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.540261   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.543534   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.544454   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.544479   23379 pod_ready.go:82] duration metric: took 401.077052ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.544494   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.739453   23379 request.go:632] Waited for 194.878612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739540   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739557   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.739572   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.739583   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.743318   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.939180   23379 request.go:632] Waited for 195.280753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939245   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939250   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.939258   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.939265   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.943381   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:53.944067   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.944085   23379 pod_ready.go:82] duration metric: took 399.577551ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.944099   23379 pod_ready.go:39] duration metric: took 4.40047197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
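
The pod_ready wait iterates over the system-critical components by label (k8s-app=kube-dns, component=etcd, and so on) and, for each pod, checks its Ready condition alongside the hosting node. A minimal sketch of the per-label check with client-go (hypothetical helper; the extra node re-check that pod_ready.go performs is omitted here):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// The same label selectors the log lists for system-critical pods.
    	selectors := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			panic(err)
    		}
    		for _, p := range pods.Items {
    			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
    		}
    	}
    }
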
	I1202 11:47:53.944119   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:47:53.944173   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:47:53.960762   23379 api_server.go:72] duration metric: took 19.808744771s to wait for apiserver process to appear ...
	I1202 11:47:53.960781   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:47:53.960802   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:47:53.965634   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:47:53.965695   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:47:53.965706   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.965717   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.965727   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.966539   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:47:53.966644   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:47:53.966664   23379 api_server.go:131] duration metric: took 5.87665ms to wait for apiserver health ...
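
The health check above is an authenticated GET of /healthz (the apiserver answers "ok") followed by GET /version to read the control-plane version. With client-go the same two calls can go through the discovery REST client; a minimal sketch (the kubeconfig path is the one from this run):

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// GET /healthz - equivalent to the api_server.go:253 probe above.
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("healthz:", string(body))

    	// GET /version - equivalent to the "control plane version: v1.31.2" line.
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("version:", v.GitVersion)
    }
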
	I1202 11:47:53.966674   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:47:54.140116   23379 request.go:632] Waited for 173.370822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140184   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140192   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.140203   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.140213   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.144688   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.150151   23379 system_pods.go:59] 17 kube-system pods found
	I1202 11:47:54.150175   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.150180   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.150184   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.150187   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.150190   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.150193   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.150196   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.150200   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.150204   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.150208   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.150213   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.150216   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.150222   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.150225   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.150228   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.150230   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.150234   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.150239   23379 system_pods.go:74] duration metric: took 183.556674ms to wait for pod list to return data ...
	I1202 11:47:54.150248   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:47:54.339686   23379 request.go:632] Waited for 189.36849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339740   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339744   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.339751   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.339755   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.343135   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.343361   23379 default_sa.go:45] found service account: "default"
	I1202 11:47:54.343386   23379 default_sa.go:55] duration metric: took 193.131705ms for default service account to be created ...
	I1202 11:47:54.343397   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:47:54.539835   23379 request.go:632] Waited for 196.371965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539943   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.539954   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.539964   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.544943   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.550739   23379 system_pods.go:86] 17 kube-system pods found
	I1202 11:47:54.550763   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.550769   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.550775   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.550778   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.550809   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.550819   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.550824   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.550829   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.550833   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.550837   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.550841   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.550848   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.550852   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.550857   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.550862   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.550867   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.550870   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.550878   23379 system_pods.go:126] duration metric: took 207.476252ms to wait for k8s-apps to be running ...
	I1202 11:47:54.550887   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:47:54.550927   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:54.567143   23379 system_svc.go:56] duration metric: took 16.250371ms WaitForService to wait for kubelet
	I1202 11:47:54.567163   23379 kubeadm.go:582] duration metric: took 20.415147049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:47:54.567180   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:47:54.739589   23379 request.go:632] Waited for 172.338353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739668   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739675   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.739683   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.739688   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.743346   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.744125   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744152   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744165   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744170   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744177   23379 node_conditions.go:105] duration metric: took 176.990456ms to run NodePressure ...
	I1202 11:47:54.744190   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:47:54.744223   23379 start.go:255] writing updated cluster config ...
	I1202 11:47:54.746253   23379 out.go:201] 
	I1202 11:47:54.747593   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:54.747718   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.749358   23379 out.go:177] * Starting "ha-604935-m03" control-plane node in "ha-604935" cluster
	I1202 11:47:54.750410   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:47:54.750433   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:47:54.750533   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:47:54.750548   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:47:54.750643   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.750878   23379 start.go:360] acquireMachinesLock for ha-604935-m03: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:47:54.750923   23379 start.go:364] duration metric: took 26.206µs to acquireMachinesLock for "ha-604935-m03"
	I1202 11:47:54.750944   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:54.751067   23379 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1202 11:47:54.752864   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:47:54.752946   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:54.752986   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:54.767584   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1202 11:47:54.767916   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:54.768481   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:54.768505   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:54.768819   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:54.768991   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:47:54.769125   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:47:54.769335   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:47:54.769376   23379 client.go:168] LocalClient.Create starting
	I1202 11:47:54.769409   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:47:54.769445   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769469   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769535   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:47:54.769563   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769581   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769610   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:47:54.769622   23379 main.go:141] libmachine: (ha-604935-m03) Calling .PreCreateCheck
	I1202 11:47:54.769820   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:47:54.770184   23379 main.go:141] libmachine: Creating machine...
	I1202 11:47:54.770198   23379 main.go:141] libmachine: (ha-604935-m03) Calling .Create
	I1202 11:47:54.770317   23379 main.go:141] libmachine: (ha-604935-m03) Creating KVM machine...
	I1202 11:47:54.771476   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing default KVM network
	I1202 11:47:54.771588   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing private KVM network mk-ha-604935
	I1202 11:47:54.771715   23379 main.go:141] libmachine: (ha-604935-m03) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:54.771731   23379 main.go:141] libmachine: (ha-604935-m03) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:47:54.771824   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:54.771717   24139 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:54.771925   23379 main.go:141] libmachine: (ha-604935-m03) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:47:55.025734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.025618   24139 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa...
	I1202 11:47:55.125359   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125265   24139 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk...
	I1202 11:47:55.125386   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing magic tar header
	I1202 11:47:55.125397   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing SSH key tar header
	I1202 11:47:55.125407   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125384   24139 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:55.125541   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03
	I1202 11:47:55.125572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:47:55.125586   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 (perms=drwx------)
	I1202 11:47:55.125605   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:47:55.125622   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:47:55.125634   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:47:55.125649   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:47:55.125663   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:55.125683   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:47:55.125697   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:47:55.125710   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:47:55.125719   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home
	I1202 11:47:55.125733   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:47:55.125745   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Skipping /home - not owner
	I1202 11:47:55.125754   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:55.126629   23379 main.go:141] libmachine: (ha-604935-m03) define libvirt domain using xml: 
	I1202 11:47:55.126649   23379 main.go:141] libmachine: (ha-604935-m03) <domain type='kvm'>
	I1202 11:47:55.126659   23379 main.go:141] libmachine: (ha-604935-m03)   <name>ha-604935-m03</name>
	I1202 11:47:55.126667   23379 main.go:141] libmachine: (ha-604935-m03)   <memory unit='MiB'>2200</memory>
	I1202 11:47:55.126675   23379 main.go:141] libmachine: (ha-604935-m03)   <vcpu>2</vcpu>
	I1202 11:47:55.126685   23379 main.go:141] libmachine: (ha-604935-m03)   <features>
	I1202 11:47:55.126693   23379 main.go:141] libmachine: (ha-604935-m03)     <acpi/>
	I1202 11:47:55.126701   23379 main.go:141] libmachine: (ha-604935-m03)     <apic/>
	I1202 11:47:55.126706   23379 main.go:141] libmachine: (ha-604935-m03)     <pae/>
	I1202 11:47:55.126709   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.126714   23379 main.go:141] libmachine: (ha-604935-m03)   </features>
	I1202 11:47:55.126721   23379 main.go:141] libmachine: (ha-604935-m03)   <cpu mode='host-passthrough'>
	I1202 11:47:55.126745   23379 main.go:141] libmachine: (ha-604935-m03)   
	I1202 11:47:55.126763   23379 main.go:141] libmachine: (ha-604935-m03)   </cpu>
	I1202 11:47:55.126773   23379 main.go:141] libmachine: (ha-604935-m03)   <os>
	I1202 11:47:55.126780   23379 main.go:141] libmachine: (ha-604935-m03)     <type>hvm</type>
	I1202 11:47:55.126791   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='cdrom'/>
	I1202 11:47:55.126796   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='hd'/>
	I1202 11:47:55.126808   23379 main.go:141] libmachine: (ha-604935-m03)     <bootmenu enable='no'/>
	I1202 11:47:55.126817   23379 main.go:141] libmachine: (ha-604935-m03)   </os>
	I1202 11:47:55.126827   23379 main.go:141] libmachine: (ha-604935-m03)   <devices>
	I1202 11:47:55.126837   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='cdrom'>
	I1202 11:47:55.126849   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/boot2docker.iso'/>
	I1202 11:47:55.126860   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hdc' bus='scsi'/>
	I1202 11:47:55.126869   23379 main.go:141] libmachine: (ha-604935-m03)       <readonly/>
	I1202 11:47:55.126878   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126888   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='disk'>
	I1202 11:47:55.126904   23379 main.go:141] libmachine: (ha-604935-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:47:55.126929   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk'/>
	I1202 11:47:55.126949   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hda' bus='virtio'/>
	I1202 11:47:55.126958   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126972   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.126984   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='mk-ha-604935'/>
	I1202 11:47:55.126990   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127001   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127011   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.127022   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='default'/>
	I1202 11:47:55.127039   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127046   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127054   23379 main.go:141] libmachine: (ha-604935-m03)     <serial type='pty'>
	I1202 11:47:55.127059   23379 main.go:141] libmachine: (ha-604935-m03)       <target port='0'/>
	I1202 11:47:55.127065   23379 main.go:141] libmachine: (ha-604935-m03)     </serial>
	I1202 11:47:55.127070   23379 main.go:141] libmachine: (ha-604935-m03)     <console type='pty'>
	I1202 11:47:55.127080   23379 main.go:141] libmachine: (ha-604935-m03)       <target type='serial' port='0'/>
	I1202 11:47:55.127089   23379 main.go:141] libmachine: (ha-604935-m03)     </console>
	I1202 11:47:55.127100   23379 main.go:141] libmachine: (ha-604935-m03)     <rng model='virtio'>
	I1202 11:47:55.127112   23379 main.go:141] libmachine: (ha-604935-m03)       <backend model='random'>/dev/random</backend>
	I1202 11:47:55.127125   23379 main.go:141] libmachine: (ha-604935-m03)     </rng>
	I1202 11:47:55.127130   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127136   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127141   23379 main.go:141] libmachine: (ha-604935-m03)   </devices>
	I1202 11:47:55.127147   23379 main.go:141] libmachine: (ha-604935-m03) </domain>
	I1202 11:47:55.127154   23379 main.go:141] libmachine: (ha-604935-m03) 
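
	The XML printed line by line above is a standard libvirt domain definition. Defining and starting an equivalent domain through the Go libvirt bindings would look roughly like the sketch below; the import path and the xmlConfig placeholder are assumptions for illustration, not taken from the kvm2 driver source:

	// define_domain.go - rough sketch of defining and starting a libvirt domain
	// from XML, as the "define libvirt domain using xml" step above does.
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt" // assumption: current Go libvirt bindings
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Placeholder for the <domain type='kvm'> ... </domain> XML shown in the log.
		xmlConfig := "<domain type='kvm'>...</domain>"

		dom, err := conn.DomainDefineXML(xmlConfig)
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // corresponds to "Creating domain..."
			log.Fatal(err)
		}
		log.Println("domain defined and started")
	}
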
	I1202 11:47:55.134362   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:04:31:c3 in network default
	I1202 11:47:55.134940   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring networks are active...
	I1202 11:47:55.134970   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:55.135700   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network default is active
	I1202 11:47:55.135994   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network mk-ha-604935 is active
	I1202 11:47:55.136395   23379 main.go:141] libmachine: (ha-604935-m03) Getting domain xml...
	I1202 11:47:55.137154   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:56.327343   23379 main.go:141] libmachine: (ha-604935-m03) Waiting to get IP...
	I1202 11:47:56.328051   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.328532   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.328560   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.328490   24139 retry.go:31] will retry after 245.534512ms: waiting for machine to come up
	I1202 11:47:56.575853   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.576344   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.576361   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.576322   24139 retry.go:31] will retry after 318.961959ms: waiting for machine to come up
	I1202 11:47:56.897058   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.897590   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.897617   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.897539   24139 retry.go:31] will retry after 408.54179ms: waiting for machine to come up
	I1202 11:47:57.308040   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.308434   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.308462   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.308386   24139 retry.go:31] will retry after 402.803745ms: waiting for machine to come up
	I1202 11:47:57.713046   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.713543   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.713570   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.713486   24139 retry.go:31] will retry after 579.226055ms: waiting for machine to come up
	I1202 11:47:58.294078   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:58.294470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:58.294499   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:58.294431   24139 retry.go:31] will retry after 896.930274ms: waiting for machine to come up
	I1202 11:47:59.192283   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:59.192647   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:59.192676   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:59.192594   24139 retry.go:31] will retry after 885.008169ms: waiting for machine to come up
	I1202 11:48:00.078944   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:00.079402   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:00.079429   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:00.079369   24139 retry.go:31] will retry after 1.252859053s: waiting for machine to come up
	I1202 11:48:01.333237   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:01.333651   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:01.333686   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:01.333595   24139 retry.go:31] will retry after 1.614324315s: waiting for machine to come up
	I1202 11:48:02.949128   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:02.949536   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:02.949565   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:02.949508   24139 retry.go:31] will retry after 1.812710836s: waiting for machine to come up
	I1202 11:48:04.763946   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:04.764375   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:04.764406   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:04.764323   24139 retry.go:31] will retry after 2.067204627s: waiting for machine to come up
	I1202 11:48:06.833288   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:06.833665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:06.833688   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:06.833637   24139 retry.go:31] will retry after 2.307525128s: waiting for machine to come up
	I1202 11:48:09.144169   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:09.144572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:09.144593   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:09.144528   24139 retry.go:31] will retry after 3.498536479s: waiting for machine to come up
	I1202 11:48:12.646257   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:12.646634   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:12.646662   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:12.646585   24139 retry.go:31] will retry after 4.180840958s: waiting for machine to come up
	I1202 11:48:16.830266   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830768   23379 main.go:141] libmachine: (ha-604935-m03) Found IP for machine: 192.168.39.211
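
	The repeated "will retry after ...: waiting for machine to come up" lines above come from a polling loop that re-checks the network's DHCP leases with growing, jittered delays until the new domain reports an address. A compact sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease query the driver actually performs:

	// wait_for_ip.go - sketch of the grow-and-jitter retry pattern behind the
	// "waiting for machine to come up" lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for inspecting the libvirt network's
	// DHCP leases for the domain's MAC address.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil && ip != "" {
				return ip, nil
			}
			// Jittered, growing back-off, mirroring the 245ms..4.18s spread in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:56:c4:59", 30*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
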
	I1202 11:48:16.830807   23379 main.go:141] libmachine: (ha-604935-m03) Reserving static IP address...
	I1202 11:48:16.831141   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find host DHCP lease matching {name: "ha-604935-m03", mac: "52:54:00:56:c4:59", ip: "192.168.39.211"} in network mk-ha-604935
	I1202 11:48:16.902131   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Getting to WaitForSSH function...
	I1202 11:48:16.902164   23379 main.go:141] libmachine: (ha-604935-m03) Reserved static IP address: 192.168.39.211
	I1202 11:48:16.902173   23379 main.go:141] libmachine: (ha-604935-m03) Waiting for SSH to be available...
	I1202 11:48:16.905075   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905526   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:16.905551   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH client type: external
	I1202 11:48:16.905772   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa (-rw-------)
	I1202 11:48:16.905800   23379 main.go:141] libmachine: (ha-604935-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:48:16.905820   23379 main.go:141] libmachine: (ha-604935-m03) DBG | About to run SSH command:
	I1202 11:48:16.905851   23379 main.go:141] libmachine: (ha-604935-m03) DBG | exit 0
	I1202 11:48:17.032533   23379 main.go:141] libmachine: (ha-604935-m03) DBG | SSH cmd err, output: <nil>: 
	I1202 11:48:17.032776   23379 main.go:141] libmachine: (ha-604935-m03) KVM machine creation complete!
	I1202 11:48:17.033131   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:17.033671   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.033865   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.034018   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:48:17.034033   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetState
	I1202 11:48:17.035293   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:48:17.035305   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:48:17.035310   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:48:17.035315   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.037352   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.037774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037900   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.038083   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038238   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038381   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.038530   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.038713   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.038724   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:48:17.143327   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.143352   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:48:17.143372   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.146175   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146516   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.146548   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146646   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.146838   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.146983   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.147108   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.147258   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.147425   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.147438   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:48:17.253131   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:48:17.253218   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:48:17.253233   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:48:17.253245   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253510   23379 buildroot.go:166] provisioning hostname "ha-604935-m03"
	I1202 11:48:17.253537   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253707   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.256428   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.256796   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256946   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.257116   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257249   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257377   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.257504   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.257691   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.257703   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m03 && echo "ha-604935-m03" | sudo tee /etc/hostname
	I1202 11:48:17.375185   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m03
	
	I1202 11:48:17.375210   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.377667   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378038   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.378062   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378264   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.378483   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378634   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378780   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.378929   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.379106   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.379136   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:48:17.496248   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.496279   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:48:17.496297   23379 buildroot.go:174] setting up certificates
	I1202 11:48:17.496309   23379 provision.go:84] configureAuth start
	I1202 11:48:17.496322   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.496560   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:17.499486   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.499912   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.499947   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.500094   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.502337   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502712   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.502737   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502856   23379 provision.go:143] copyHostCerts
	I1202 11:48:17.502886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.502931   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:48:17.502944   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.503023   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:48:17.503097   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503116   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:48:17.503123   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503148   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:48:17.503191   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503207   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:48:17.503214   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503234   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:48:17.503299   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m03 san=[127.0.0.1 192.168.39.211 ha-604935-m03 localhost minikube]
	I1202 11:48:17.587852   23379 provision.go:177] copyRemoteCerts
	I1202 11:48:17.587906   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:48:17.587927   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.590598   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.590995   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.591015   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.591197   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.591367   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.591543   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.591679   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:17.674221   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:48:17.674296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:48:17.698597   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:48:17.698660   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:48:17.723039   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:48:17.723097   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:48:17.747396   23379 provision.go:87] duration metric: took 251.076751ms to configureAuth
	I1202 11:48:17.747416   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:48:17.747635   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:17.747715   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.750670   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751052   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.751081   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751262   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.751452   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751599   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751748   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.751905   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.752098   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.752117   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:48:17.976945   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:48:17.976975   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:48:17.976987   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetURL
	I1202 11:48:17.978227   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using libvirt version 6000000
	I1202 11:48:17.980581   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.980959   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.980987   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.981117   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:48:17.981135   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:48:17.981143   23379 client.go:171] duration metric: took 23.211756514s to LocalClient.Create
	I1202 11:48:17.981168   23379 start.go:167] duration metric: took 23.211833697s to libmachine.API.Create "ha-604935"
	I1202 11:48:17.981181   23379 start.go:293] postStartSetup for "ha-604935-m03" (driver="kvm2")
	I1202 11:48:17.981196   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:48:17.981223   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.981429   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:48:17.981453   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.983470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983816   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.983841   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983966   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.984144   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.984312   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.984449   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.067334   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:48:18.072037   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:48:18.072060   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:48:18.072140   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:48:18.072226   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:48:18.072251   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:48:18.072352   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:48:18.083182   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:18.110045   23379 start.go:296] duration metric: took 128.848906ms for postStartSetup
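The filesync scan above mirrors anything placed under .minikube/files onto the guest at the same path relative to that directory (here files/etc/ssl/certs/134162.pem lands at /etc/ssl/certs/134162.pem). A minimal Go sketch of that mapping, assuming a local root directory and printing the targets instead of copying them over SSH; it is not minikube's implementation:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	// Assumed local root; minikube scans both .minikube/addons and .minikube/files.
    	root := "/home/jenkins/minikube-integration/20033-6257/.minikube/files"

    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
    		if walkErr != nil || d.IsDir() {
    			return walkErr
    		}
    		rel, relErr := filepath.Rel(root, path)
    		if relErr != nil {
    			return relErr
    		}
    		// Each regular file maps to "/" + its path relative to the files dir,
    		// e.g. files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem.
    		fmt.Printf("local asset: %s -> /%s\n", path, rel)
    		return nil
    	})
    	if err != nil {
    		fmt.Println("scan failed:", err)
    	}
    }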
	I1202 11:48:18.110090   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:18.110693   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.113273   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113636   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.113656   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113891   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:48:18.114175   23379 start.go:128] duration metric: took 23.363096022s to createHost
	I1202 11:48:18.114201   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.116660   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.116982   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.117010   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.117166   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.117378   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117545   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117689   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.117845   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:18.118040   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:18.118051   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:48:18.225174   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140098.198364061
	
	I1202 11:48:18.225197   23379 fix.go:216] guest clock: 1733140098.198364061
	I1202 11:48:18.225206   23379 fix.go:229] Guest: 2024-12-02 11:48:18.198364061 +0000 UTC Remote: 2024-12-02 11:48:18.114189112 +0000 UTC m=+146.672947053 (delta=84.174949ms)
	I1202 11:48:18.225226   23379 fix.go:200] guest clock delta is within tolerance: 84.174949ms
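The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp recorded when the SSH command returned, and accept the node when the skew is small. A minimal Go sketch of that comparison using the two timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` (e.g. "1733140098.198364061")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1733140098.198364061") // guest value from the log above
    	if err != nil {
    		panic(err)
    	}
    	// Host-side reference time taken when the SSH command returned (from the log).
    	remote := time.Date(2024, 12, 2, 11, 48, 18, 114189112, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold, for illustration only
    	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
    }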
	I1202 11:48:18.225232   23379 start.go:83] releasing machines lock for "ha-604935-m03", held for 23.474299783s
	I1202 11:48:18.225255   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.225523   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.228223   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.228665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.228698   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.231057   23379 out.go:177] * Found network options:
	I1202 11:48:18.232381   23379 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.96
	W1202 11:48:18.233581   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.233602   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.233614   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234079   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234244   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234317   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:48:18.234369   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	W1202 11:48:18.234421   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.234435   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.234477   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:48:18.234492   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.237268   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237547   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237709   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.237734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237883   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.237989   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.238016   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.238057   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238152   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.238220   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238300   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238378   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.238455   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238579   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.473317   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:48:18.479920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:48:18.479984   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:48:18.496983   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:48:18.497001   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:48:18.497065   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:48:18.513241   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:48:18.527410   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:48:18.527466   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:48:18.541725   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:48:18.557008   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:48:18.688718   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:48:18.852643   23379 docker.go:233] disabling docker service ...
	I1202 11:48:18.852707   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:48:18.868163   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:48:18.881925   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:48:19.017240   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:48:19.151423   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:48:19.165081   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:48:19.183322   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:48:19.183382   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.193996   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:48:19.194053   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.204159   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.214125   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.224009   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:48:19.234581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.244825   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.261368   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
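The run of sed commands above pins the pause image, switches CRI-O to the cgroupfs cgroup manager, moves conmon into the "pod" cgroup, and adds a default_sysctls entry opening unprivileged ports. A rough Go sketch that applies the same replacements to an illustrative drop-in; the starting contents and section layout are assumptions (the real 02-crio.conf ships in the guest image), and the last step folds the two logged default_sysctls edits into one:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // apply mirrors one `sed -i 's|pattern|replacement|'` from the log.
    func apply(conf, pattern, replacement string) string {
    	return regexp.MustCompile("(?m)" + pattern).ReplaceAllString(conf, replacement)
    }

    func main() {
    	// Illustrative starting drop-in.
    	conf := `[crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    `

    	conf = apply(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = apply(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
    	conf = apply(conf, `^ *conmon_cgroup = .*\n`, "") // sed '/conmon_cgroup = .*/d'
    	conf = apply(conf, `^(cgroup_manager = .*)$`, "$1\nconmon_cgroup = \"pod\"") // append after cgroup_manager
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		// combines the two logged steps: create default_sysctls and insert the entry
    		conf = apply(conf, `^(conmon_cgroup = .*)$`,
    			"$1\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
    	}
    	fmt.Print(conf)
    }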
	I1202 11:48:19.270942   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:48:19.279793   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:48:19.279828   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:48:19.292711   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:48:19.302043   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:19.426581   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:48:19.517813   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:48:19.517869   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:48:19.523046   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:48:19.523100   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:48:19.526693   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:48:19.569077   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:48:19.569154   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.606184   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.639221   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:48:19.640557   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:48:19.641750   23379 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.96
	I1202 11:48:19.642878   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:19.645504   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.645963   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:19.645990   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.646180   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:48:19.650508   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
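The /etc/hosts edit above is deliberately idempotent: it strips any existing host.minikube.internal line before appending the current mapping, so repeated runs never accumulate duplicate entries. A small local Go sketch of the same upsert (the real command is the bash pipeline shown in the log, run over SSH with a sudo cp at the end):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost re-implements the bash pipeline locally: drop any existing line
    // for the host, then append the desired mapping.
    func upsertHost(hostsFile, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(hostsFile, "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // equivalent of grep -v $'\thost.minikube.internal$'
    		}
    		kept = append(kept, line)
    	}
    	return strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + host + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    	fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
    }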
	I1202 11:48:19.664882   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:48:19.665139   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:19.665497   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.665538   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.680437   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1202 11:48:19.680830   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.681262   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.681286   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.681575   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.681746   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:48:19.683191   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:19.683564   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.683606   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.697831   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I1202 11:48:19.698152   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.698542   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.698559   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.698845   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.699001   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:19.699166   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.211
	I1202 11:48:19.699179   23379 certs.go:194] generating shared ca certs ...
	I1202 11:48:19.699197   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.699318   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:48:19.699355   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:48:19.699364   23379 certs.go:256] generating profile certs ...
	I1202 11:48:19.699432   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:48:19.699455   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864
	I1202 11:48:19.699468   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.211 192.168.39.254]
	I1202 11:48:19.775540   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 ...
	I1202 11:48:19.775561   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864: {Name:mk862a073739ee2a78cf9f81a3258f4be6a2f692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775718   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 ...
	I1202 11:48:19.775732   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864: {Name:mk2b946b8deaf42e144aacb0aeac107c1e5e5346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775826   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:48:19.775947   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
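The apiserver serving certificate generated above carries SANs for the service IP, localhost, every control-plane node IP, and the kube-vip VIP 192.168.39.254, which is why adding a third control-plane node forces a fresh apiserver.crt. A minimal crypto/x509 sketch that issues a certificate with that SAN set under a throwaway CA (a stand-in for minikubeCA; error handling elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA stand-in; in the real flow the CA key pair already exists under .minikube/.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// API-server serving cert with the SAN set seen in the log above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.96"),
    			net.ParseIP("192.168.39.211"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }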
	I1202 11:48:19.776063   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:48:19.776077   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:48:19.776089   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:48:19.776102   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:48:19.776114   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:48:19.776131   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:48:19.776145   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:48:19.776157   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:48:19.800328   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:48:19.800402   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:48:19.800434   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:48:19.800443   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:48:19.800467   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:48:19.800488   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:48:19.800508   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:48:19.800550   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:19.800576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:19.800589   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:48:19.800601   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:48:19.800629   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:19.803275   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803700   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:19.803723   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803908   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:19.804099   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:19.804214   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:19.804377   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:19.880485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:48:19.886022   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:48:19.898728   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:48:19.903305   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:48:19.914871   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:48:19.919141   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:48:19.929566   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:48:19.933478   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:48:19.943613   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:48:19.948089   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:48:19.958895   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:48:19.964303   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:48:19.977617   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:48:20.002994   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:48:20.029806   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:48:20.053441   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:48:20.076846   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1202 11:48:20.100859   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:48:20.123816   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:48:20.147882   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:48:20.170789   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:48:20.194677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:48:20.217677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:48:20.242059   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:48:20.259613   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:48:20.277187   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:48:20.294496   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:48:20.311183   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:48:20.328629   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:48:20.347609   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:48:20.365780   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:48:20.371782   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:48:20.383879   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388524   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388568   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.394674   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:48:20.407273   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:48:20.419450   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424025   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424067   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.429730   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:48:20.440110   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:48:20.451047   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456468   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456512   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.462924   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:48:20.474358   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:48:20.478447   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:48:20.478499   23379 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1202 11:48:20.478603   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:48:20.478639   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:48:20.478678   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:48:20.496205   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:48:20.496274   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
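Before a static-pod manifest like the kube-vip config above is dropped into /etc/kubernetes/manifests, it can be sanity-checked by unmarshalling it into the corev1.Pod type. A sketch of such a check, assuming the sigs.k8s.io/yaml and k8s.io/api modules and a local kube-vip.yaml copy; this extra step is not part of minikube's flow:

    package main

    import (
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	data, err := os.ReadFile("kube-vip.yaml") // hypothetical local copy of the manifest
    	if err != nil {
    		panic(err)
    	}
    	var pod corev1.Pod
    	if err := yaml.UnmarshalStrict(data, &pod); err != nil {
    		panic(fmt.Errorf("manifest does not parse as a Pod: %w", err))
    	}
    	fmt.Printf("pod %s/%s with %d container(s), hostNetwork=%v\n",
    		pod.Namespace, pod.Name, len(pod.Spec.Containers), pod.Spec.HostNetwork)
    }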
	I1202 11:48:20.496312   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.507618   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:48:20.507658   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.517119   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1202 11:48:20.517130   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:48:20.517161   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517164   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:48:20.517126   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1202 11:48:20.517219   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517234   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.517303   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.534132   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534202   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534220   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:48:20.534247   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:48:20.534296   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:48:20.534330   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:48:20.553870   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:48:20.553896   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
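The kubeadm/kubectl/kubelet downloads above use go-getter-style "?checksum=file:<url>.sha256" hints, so each binary is verified against its published digest before being copied into /var/lib/minikube/binaries. A standalone Go sketch of the same verify-then-install idea (the URL is taken from the log; the local output path is illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"

    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}

    	got := sha256.Sum256(bin)
    	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
    	if hex.EncodeToString(got[:]) != want {
    		panic("checksum mismatch for kubelet")
    	}
    	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
    		panic(err)
    	}
    	fmt.Println("kubelet verified and written")
    }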
	I1202 11:48:21.369626   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:48:21.380201   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1202 11:48:21.397686   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:48:21.414134   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:48:21.430962   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:48:21.434795   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:48:21.446707   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:21.575648   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:21.592190   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:21.592653   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:21.592702   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:21.607602   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I1202 11:48:21.608034   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:21.608505   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:21.608523   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:21.608871   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:21.609064   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:21.609215   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:48:21.609330   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:48:21.609352   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:21.612246   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612678   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:21.612705   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612919   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:21.613101   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:21.613260   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:21.613431   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:21.802258   23379 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:21.802311   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1202 11:48:44.058534   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.25619815s)
	I1202 11:48:44.058574   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:48:44.589392   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m03 minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:48:44.754182   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:48:44.876509   23379 start.go:319] duration metric: took 23.267291972s to joinCluster
	I1202 11:48:44.876583   23379 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:44.876929   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:44.877896   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:48:44.879178   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:45.205771   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:45.227079   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:48:45.227379   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:48:45.227437   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:48:45.227646   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m03" to be "Ready" ...
	I1202 11:48:45.227731   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.227739   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.227750   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.227760   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.230602   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:45.728816   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.728844   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.728856   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.728862   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.732325   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:46.228808   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.228838   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.228847   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.228855   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.232971   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:46.728246   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.728266   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.728275   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.728278   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.731578   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:47.228275   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.228293   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.228302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.228305   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.231235   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:47.231687   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
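The round_trippers lines that follow are node_ready.go polling GET /api/v1/nodes/ha-604935-m03 roughly every 500ms until the node reports Ready. A minimal client-go sketch of the same wait loop (the kubeconfig path is taken from the log; the loop shape is an illustration, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// The log waits "up to 6m0s" for the new node.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	for {
    		node, err := client.CoreV1().Nodes().Get(ctx, "ha-604935-m03", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node ha-604935-m03 is Ready")
    					return
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			panic("timed out waiting for node to become Ready")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }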
	I1202 11:48:47.728543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.728564   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.728575   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.728580   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.731725   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.228100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.228126   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.228134   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.228139   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.231200   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.727927   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.727953   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.727965   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.727971   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.731841   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.228251   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.228277   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.228288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.228295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.231887   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.232816   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:49.728539   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.728558   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.728567   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.728578   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.731618   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.228164   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.228182   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.228190   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.228194   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.231677   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.728841   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.728865   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.728877   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.728884   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.731790   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:51.227844   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.227875   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.227882   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.227886   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.231092   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.728369   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.728389   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.728397   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.728402   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.731512   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.732161   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:52.228555   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.228577   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.228585   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.228590   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.232624   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:52.727915   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.727935   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.727942   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.727946   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.731213   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:53.228361   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.228382   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.228389   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.228392   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.233382   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:53.728248   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.728268   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.728276   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.728280   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.731032   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:54.228383   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.228402   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.228409   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.228414   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.231567   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:54.232182   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:54.728033   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.728054   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.728070   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.728078   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.731003   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:55.227931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.227952   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.227959   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.227963   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.231124   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:55.728257   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.728282   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.728295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.728302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.731469   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.228616   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.228634   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.228642   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.228648   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.231749   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.232413   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:56.728627   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.728662   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.728672   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.728679   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.731199   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.228073   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.228095   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.228106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.228112   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.231071   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.728355   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.728374   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.728386   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.728390   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.732053   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.228692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.228716   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.228725   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.228731   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.231871   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.232534   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:58.727842   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.727867   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.727888   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.727893   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.730412   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:59.228495   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.228515   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.228522   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.228525   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.232497   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:59.728247   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.728264   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.728272   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.728275   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.731212   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.227900   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.227922   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.227929   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.227932   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.232057   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:00.233141   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:49:00.728080   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.728104   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.728116   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.728123   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.730928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.731736   23379 node_ready.go:49] node "ha-604935-m03" has status "Ready":"True"
	I1202 11:49:00.731754   23379 node_ready.go:38] duration metric: took 15.50409308s for node "ha-604935-m03" to be "Ready" ...
	I1202 11:49:00.731762   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:00.731812   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:00.731821   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.731828   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.731833   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.737119   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:00.743811   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.743881   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:49:00.743889   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.743896   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.743900   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.746447   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.747270   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.747288   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.747298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.747304   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.750173   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.750663   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.750685   23379 pod_ready.go:82] duration metric: took 6.851528ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750697   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750762   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:49:00.750773   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.750782   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.750787   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.753393   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.754225   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.754242   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.754253   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.754261   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.756959   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.757348   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.757363   23379 pod_ready.go:82] duration metric: took 6.658502ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757372   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757427   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:49:00.757438   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.757444   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.757449   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.759919   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.760524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.760540   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.760551   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.760557   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.762639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.763103   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.763117   23379 pod_ready.go:82] duration metric: took 5.738836ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763130   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763170   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:49:00.763178   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.763184   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.763187   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.765295   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.765840   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:00.765853   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.765859   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.765866   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.767856   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:49:00.768294   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.768308   23379 pod_ready.go:82] duration metric: took 5.173078ms for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.768315   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.928568   23379 request.go:632] Waited for 160.204775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928622   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928630   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.928637   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.931639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.129121   23379 request.go:632] Waited for 196.362858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129188   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129194   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.129201   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.129206   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.132093   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.132639   23379 pod_ready.go:93] pod "etcd-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.132663   23379 pod_ready.go:82] duration metric: took 364.340751ms for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.132685   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.328581   23379 request.go:632] Waited for 195.818618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328645   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.328651   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.328659   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.332129   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.528887   23379 request.go:632] Waited for 196.197458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528960   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528968   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.528983   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.528991   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.531764   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.532366   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.532385   23379 pod_ready.go:82] duration metric: took 399.689084ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.532395   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.729145   23379 request.go:632] Waited for 196.686289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729214   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729222   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.729232   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.729241   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.732550   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.928940   23379 request.go:632] Waited for 195.375728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929027   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929039   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.929049   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.929060   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.932849   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.933394   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.933415   23379 pod_ready.go:82] duration metric: took 401.013286ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.933428   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.128618   23379 request.go:632] Waited for 195.115216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128704   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.128714   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.128744   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.132085   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.328195   23379 request.go:632] Waited for 195.287157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328272   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328280   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.328290   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.328294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.331350   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.332062   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.332086   23379 pod_ready.go:82] duration metric: took 398.648799ms for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.332096   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.528402   23379 request.go:632] Waited for 196.237056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528456   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528461   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.528468   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.528471   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.531001   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:02.729030   23379 request.go:632] Waited for 197.344265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729083   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729088   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.729095   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.729101   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.733927   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:02.734415   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.734433   23379 pod_ready.go:82] duration metric: took 402.330362ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.734442   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.928547   23379 request.go:632] Waited for 194.020533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928615   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928624   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.928634   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.933547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.128827   23379 request.go:632] Waited for 194.344486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128890   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128895   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.128915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.128921   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.133610   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.134316   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.134333   23379 pod_ready.go:82] duration metric: took 399.884969ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.134345   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.328421   23379 request.go:632] Waited for 194.000988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328488   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328493   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.328500   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.328505   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.331240   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:03.528448   23379 request.go:632] Waited for 196.353439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528532   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.528542   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.528554   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.532267   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.532704   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.532722   23379 pod_ready.go:82] duration metric: took 398.368333ms for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.532747   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.728896   23379 request.go:632] Waited for 196.080235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728966   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728972   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.728979   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.728982   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.732009   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.929024   23379 request.go:632] Waited for 196.282412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929090   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929096   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.929106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.929111   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.932496   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.933154   23379 pod_ready.go:93] pod "kube-proxy-rp7t2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.933174   23379 pod_ready.go:82] duration metric: took 400.416355ms for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.933184   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.128132   23379 request.go:632] Waited for 194.87576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128183   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128188   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.128196   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.128200   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.131316   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.328392   23379 request.go:632] Waited for 196.344562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328464   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.328488   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.328504   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.331622   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.332330   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.332349   23379 pod_ready.go:82] duration metric: took 399.158434ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.332362   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.528404   23379 request.go:632] Waited for 195.973025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528476   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528485   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.528499   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.528512   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.531287   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.728831   23379 request.go:632] Waited for 196.723103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728880   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728888   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.728918   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.728926   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.731917   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.732716   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.732733   23379 pod_ready.go:82] duration metric: took 400.363929ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.732741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.928126   23379 request.go:632] Waited for 195.328391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928219   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.928242   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.928251   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.931908   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.129033   23379 request.go:632] Waited for 196.165096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129107   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129114   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.129124   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.129131   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.132837   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.133502   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.133521   23379 pod_ready.go:82] duration metric: took 400.774358ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.133531   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.328705   23379 request.go:632] Waited for 195.110801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328775   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328782   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.328792   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.328804   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.332423   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.528425   23379 request.go:632] Waited for 195.360611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528479   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528484   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.528491   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.528494   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.531378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:05.531939   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.531957   23379 pod_ready.go:82] duration metric: took 398.419577ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.531967   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.728987   23379 request.go:632] Waited for 196.947438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729040   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729045   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.729052   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.729056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.732940   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.928937   23379 request.go:632] Waited for 195.348906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928990   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928996   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.929007   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.929023   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.932936   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.933995   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.934013   23379 pod_ready.go:82] duration metric: took 402.03942ms for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.934028   23379 pod_ready.go:39] duration metric: took 5.202257007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:05.934044   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:49:05.934111   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:49:05.950308   23379 api_server.go:72] duration metric: took 21.073692026s to wait for apiserver process to appear ...
	I1202 11:49:05.950330   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:49:05.950350   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:49:05.954392   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:49:05.954463   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:49:05.954472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.954479   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.954484   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.955264   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:49:05.955324   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:49:05.955340   23379 api_server.go:131] duration metric: took 5.002951ms to wait for apiserver health ...
	I1202 11:49:05.955348   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:49:06.128765   23379 request.go:632] Waited for 173.340291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128831   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128854   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.128868   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.128878   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.134738   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:06.141415   23379 system_pods.go:59] 24 kube-system pods found
	I1202 11:49:06.141437   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.141442   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.141446   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.141449   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.141453   23379 system_pods.go:61] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.141457   23379 system_pods.go:61] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.141461   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.141464   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.141468   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.141471   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.141475   23379 system_pods.go:61] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.141479   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.141487   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.141494   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.141507   23379 system_pods.go:61] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.141512   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.141517   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.141523   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.141527   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.141531   23379 system_pods.go:61] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.141534   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.141540   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.141543   23379 system_pods.go:61] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.141545   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.141551   23379 system_pods.go:74] duration metric: took 186.197102ms to wait for pod list to return data ...
	I1202 11:49:06.141560   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:49:06.329008   23379 request.go:632] Waited for 187.367529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329113   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.329125   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.329130   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.332755   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.332967   23379 default_sa.go:45] found service account: "default"
	I1202 11:49:06.332983   23379 default_sa.go:55] duration metric: took 191.417488ms for default service account to be created ...
	I1202 11:49:06.332991   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:49:06.528293   23379 request.go:632] Waited for 195.242273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528375   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.528382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.528388   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.533257   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:06.539940   23379 system_pods.go:86] 24 kube-system pods found
	I1202 11:49:06.539965   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.539970   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.539976   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.539980   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.539983   23379 system_pods.go:89] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.539986   23379 system_pods.go:89] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.539989   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.539995   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.539998   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.540002   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.540006   23379 system_pods.go:89] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.540009   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.540013   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.540016   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.540020   23379 system_pods.go:89] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.540024   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.540028   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.540034   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.540037   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.540040   23379 system_pods.go:89] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.540043   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.540046   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.540049   23379 system_pods.go:89] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.540053   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.540058   23379 system_pods.go:126] duration metric: took 207.062281ms to wait for k8s-apps to be running ...
	I1202 11:49:06.540068   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:49:06.540106   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:49:06.555319   23379 system_svc.go:56] duration metric: took 15.24289ms WaitForService to wait for kubelet
	I1202 11:49:06.555341   23379 kubeadm.go:582] duration metric: took 21.678727669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:49:06.555356   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:49:06.728222   23379 request.go:632] Waited for 172.787542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728311   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728317   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.728327   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.728332   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.731784   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.733040   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733062   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733074   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733079   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733084   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733088   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733094   23379 node_conditions.go:105] duration metric: took 177.727321ms to run NodePressure ...
	I1202 11:49:06.733107   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:49:06.733138   23379 start.go:255] writing updated cluster config ...
	I1202 11:49:06.733452   23379 ssh_runner.go:195] Run: rm -f paused
	I1202 11:49:06.787558   23379 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:49:06.789249   23379 out.go:177] * Done! kubectl is now configured to use "ha-604935" cluster and "default" namespace by default
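
The polling visible above is the node_ready/pod_ready wait: the client re-GETs /api/v1/nodes/ha-604935-m03 roughly every 500ms until the node reports Ready:"True", then applies the same per-object Ready-condition check to each system-critical pod before probing the apiserver's /healthz. The Go sketch below reproduces that pattern with client-go; it is illustrative only, not minikube's implementation, and the kubeconfig path, poll interval and timeout are assumptions.

// nodeready_sketch.go — a minimal client-go rendering of the poll loop above.
// Illustrative only: the kubeconfig path, 500ms interval and 6-minute timeout
// are assumptions, not minikube's actual values.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady re-GETs the node until its NodeReady condition is "True",
// mirroring the repeated GET /api/v1/nodes/<name> requests in the log.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node has status "Ready":"True"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Hypothetical kubeconfig path; point it at any reachable cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "ha-604935-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}

The pod_ready lines that follow in the log record the same Ready-condition test, applied to each pod returned by GET /api/v1/namespaces/kube-system/pods.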
	
	
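The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (request.go), which by default allows roughly 5 requests per second with a burst of about 10; the paired pod+node GETs during the pod_ready phase exceed that budget, so requests are delayed in the client before they ever reach the apiserver. A hedged sketch of loosening that limiter when reproducing this flow outside minikube (the path and values are arbitrary assumptions):

// Illustrative only: raise client-go's default QPS/Burst so back-to-back
// GETs are not delayed by the client-side limiter that produced the
// "Waited for ..." messages above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // defaults to ~5 requests/second when left at zero
	cfg.Burst = 100 // default burst is ~10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset with relaxed client-side limits: %T\n", cs)
}
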
	==> CRI-O <==
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.938562146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e98dc28-8818-4283-97ff-83bc01f615c1 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.940146414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fe61c54-9aea-45c1-8ac9-b4a00d76c6a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.940768229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140369940743337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fe61c54-9aea-45c1-8ac9-b4a00d76c6a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.948031163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ceab7da-3b87-4518-9810-9398bf5fe656 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.948103165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ceab7da-3b87-4518-9810-9398bf5fe656 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.948303717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ceab7da-3b87-4518-9810-9398bf5fe656 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.987821447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7b103d2-8f53-414c-8e50-5c873434c6c9 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.987890417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7b103d2-8f53-414c-8e50-5c873434c6c9 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.988853822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=494c3a74-6396-4dd1-940b-8f108862e9bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.989262615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140369989241615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=494c3a74-6396-4dd1-940b-8f108862e9bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.989760497Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69d1f68f-f436-4e42-895f-45c92bbb7a8f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.989810340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69d1f68f-f436-4e42-895f-45c92bbb7a8f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:49 ha-604935 crio[658]: time="2024-12-02 11:52:49.990012618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69d1f68f-f436-4e42-895f-45c92bbb7a8f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.012237915Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c9933953-4925-428c-aa4e-afa0c3d03999 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.012639133Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8jxc4,Uid:f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733140148279592187,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:49:07.666936685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1023dda9-1199-4200-9b82-bb054a0eedff,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1733140013381225285,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T11:46:53.065981152Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g48q9,Uid:66ce87a9-4918-45fd-9721-d4e6323b7b54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733140013379375022,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:53.065488407Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5gcc2,Uid:63fea190-8001-4264-a579-13a9cae6ddff,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733140013372020076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63fea190-8001-4264-a579-13a9cae6ddff,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:53.058488150Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&PodSandboxMetadata{Name:kindnet-k99r8,Uid:e5466844-1f48-46c2-8e34-c4bf016b9656,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139999159079477,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:38.840314062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&PodSandboxMetadata{Name:kube-proxy-tqcb6,Uid:d576fbb5-bee1-4482-82f5-b21a5e1e65f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139999157955919,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:38.836053895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-604935,Uid:3795b7eb129e1555193fc4481f415c61,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733139987835770182,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3795b7eb129e1555193fc4481f415c61,kubernetes.io/config.seen: 2024-12-02T11:46:27.334541833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-604935,Uid:e34a31690bf4b94086a296305429f2bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987829372109,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{kubernetes.io/config.hash: e34a
31690bf4b94086a296305429f2bd,kubernetes.io/config.seen: 2024-12-02T11:46:27.334542605Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-604935,Uid:1298b086a2bd0a1c4a6a3d5c72224eab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987825890188,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.102:8443,kubernetes.io/config.hash: 1298b086a2bd0a1c4a6a3d5c72224eab,kubernetes.io/config.seen: 2024-12-02T11:46:27.334538959Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Met
adata:&PodSandboxMetadata{Name:etcd-ha-604935,Uid:7e46709c5369afc1ad72a60c327e7e03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987807865871,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.102:2379,kubernetes.io/config.hash: 7e46709c5369afc1ad72a60c327e7e03,kubernetes.io/config.seen: 2024-12-02T11:46:27.334535639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-604935,Uid:367ab693a9f84a18356ae64542b127be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987806690295,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 367ab693a9f84a18356ae64542b127be,kubernetes.io/config.seen: 2024-12-02T11:46:27.334540819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c9933953-4925-428c-aa4e-afa0c3d03999 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.013330486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e17d7dd-d91c-4e4b-9042-af6e6062b80d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.013393936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e17d7dd-d91c-4e4b-9042-af6e6062b80d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.013723262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e17d7dd-d91c-4e4b-9042-af6e6062b80d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.032145004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e201600-f626-4e39-89a4-a0a22ae7b905 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.032199229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e201600-f626-4e39-89a4-a0a22ae7b905 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.033681320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ff4d1fc-7021-4f3f-b593-4b9fa7c9c4e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.034110175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140370034090870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ff4d1fc-7021-4f3f-b593-4b9fa7c9c4e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.034860825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e560a1ba-1a2a-413e-8afa-c9d1abe13616 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.034930336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e560a1ba-1a2a-413e-8afa-c9d1abe13616 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:50 ha-604935 crio[658]: time="2024-12-02 11:52:50.035133970Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e560a1ba-1a2a-413e-8afa-c9d1abe13616 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	27068dc5178bb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1f0c13e663748       busybox-7dff88458-8jxc4
	be0c4adffd61b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   72cc1a04d8965       coredns-7c65d6cfc9-g48q9
	91c90e9d05cf7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   abbb2caf2ff00       coredns-7c65d6cfc9-5gcc2
	9d7d77b59569b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   40752b9892351       storage-provisioner
	579b11920d9fd       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   646eade60f2d2       kindnet-k99r8
	f6a700874f779       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   8ba57f92e62cd       kube-proxy-tqcb6
	17bfa0393f187       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   096eb67e8b05d       kube-vip-ha-604935
	275d716cfd4f7       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8978121739b66       kube-controller-manager-ha-604935
	090e4a0254277       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   1989811c4f393       kube-scheduler-ha-604935
	53184ed95349a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ec95830bfe24d       etcd-ha-604935
	9624bba327f9b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   fc4151eee5a3f       kube-apiserver-ha-604935
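The table above is the CRI-O view of the primary control plane at capture time: every core component (etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, kube-vip, kindnet, both CoreDNS replicas, storage-provisioner, and the busybox test pod) is Running with attempt 0. Purely as an illustration, the sketch below pulls a comparable listing by shelling out to the minikube CLI and crictl; it assumes a minikube binary on PATH and reuses the ha-604935 profile name from this run, and it is not part of the test suite.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // List all CRI containers on the ha-604935 VM, i.e. the same data the
        // "container status" table above was captured from.
        out, err := exec.Command(
            "minikube", "-p", "ha-604935", "ssh", "--",
            "sudo", "crictl", "ps", "-a",
        ).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("crictl listing failed:", err)
        }
    }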
	
	
	==> coredns [91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f] <==
	[INFO] 10.244.0.4:39323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215731s
	[INFO] 10.244.0.4:33525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162613s
	[INFO] 10.244.0.4:39123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125815s
	[INFO] 10.244.0.4:37376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000244786s
	[INFO] 10.244.2.2:44210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174232s
	[INFO] 10.244.2.2:54748 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001765833s
	[INFO] 10.244.2.2:60174 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284786s
	[INFO] 10.244.2.2:50584 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109022s
	[INFO] 10.244.2.2:34854 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001186229s
	[INFO] 10.244.2.2:42659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081441s
	[INFO] 10.244.2.2:51018 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119851s
	[INFO] 10.244.1.2:51189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371264s
	[INFO] 10.244.1.2:57162 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158703s
	[INFO] 10.244.0.4:59693 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068002s
	[INFO] 10.244.0.4:51163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042042s
	[INFO] 10.244.2.2:40625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117188s
	[INFO] 10.244.1.2:49002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091339s
	[INFO] 10.244.1.2:42507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192925s
	[INFO] 10.244.0.4:36452 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215238s
	[INFO] 10.244.0.4:41389 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010969s
	[INFO] 10.244.2.2:55194 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180309s
	[INFO] 10.244.2.2:45875 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109142s
	[INFO] 10.244.1.2:42301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164839s
	[INFO] 10.244.1.2:47133 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176562s
	[INFO] 10.244.1.2:42848 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122646s
	
	
	==> coredns [be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818] <==
	[INFO] 10.244.1.2:33047 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000108391s
	[INFO] 10.244.1.2:40927 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001980013s
	[INFO] 10.244.0.4:37566 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004168289s
	[INFO] 10.244.0.4:36737 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252503s
	[INFO] 10.244.0.4:33046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003375406s
	[INFO] 10.244.0.4:42598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177128s
	[INFO] 10.244.2.2:46358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148802s
	[INFO] 10.244.1.2:55837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194128s
	[INFO] 10.244.1.2:55278 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002096061s
	[INFO] 10.244.1.2:45640 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141771s
	[INFO] 10.244.1.2:36834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204172s
	[INFO] 10.244.1.2:41503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026722s
	[INFO] 10.244.1.2:46043 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001413s
	[INFO] 10.244.0.4:37544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011909s
	[INFO] 10.244.0.4:58597 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007644s
	[INFO] 10.244.2.2:41510 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179912s
	[INFO] 10.244.2.2:41733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013607s
	[INFO] 10.244.2.2:57759 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205972s
	[INFO] 10.244.1.2:54620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248357s
	[INFO] 10.244.1.2:40630 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109148s
	[INFO] 10.244.0.4:39309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113844s
	[INFO] 10.244.0.4:42691 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170784s
	[INFO] 10.244.2.2:41138 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112783s
	[INFO] 10.244.2.2:32778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073017s
	[INFO] 10.244.1.2:42298 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018329s
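Both CoreDNS replicas show the same healthy pattern: A, AAAA and PTR lookups for kubernetes.default and host.minikube.internal arriving from the pod network (10.244.0.4, 10.244.1.2, 10.244.2.2), each answered in well under a millisecond, with NXDOMAIN only for the expected search-path misses. The sketch below is a rough parser for one of these query-log lines; the field layout is assumed to follow the CoreDNS log plugin's default format, and the regular expression is only an approximation of it (it also hardcodes the IN class seen in every line here).

    package main

    import (
        "fmt"
        "regexp"
    )

    // Approximate shape of a CoreDNS query-log line as seen above:
    // client:port - id "TYPE IN name proto size do bufsize" RCODE flags rsize duration
    var queryLine = regexp.MustCompile(
        `\[INFO\] ([\d.]+):\d+ - \d+ "(\S+) IN (\S+) \S+ \d+ \S+ \d+" (\S+) \S+ \d+ (\S+)`)

    func main() {
        line := `[INFO] 10.244.2.2:54748 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001765833s`
        m := queryLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("line did not match")
            return
        }
        // m[1]=client, m[2]=query type, m[3]=name, m[4]=rcode, m[5]=duration
        fmt.Printf("client=%s type=%s name=%s rcode=%s took=%s\n", m[1], m[2], m[3], m[4], m[5])
    }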
	
	
	==> describe nodes <==
	Name:               ha-604935
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-604935
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4653179aa8d04165a06718969a078842
	  System UUID:                4653179a-a8d0-4165-a067-18969a078842
	  Boot ID:                    059fb5e8-3774-458b-bfbf-8364817017d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8jxc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-5gcc2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 coredns-7c65d6cfc9-g48q9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 etcd-ha-604935                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-k99r8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-ha-604935             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-604935    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-tqcb6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-ha-604935             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-604935                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  Starting                 6m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m16s (x2 over 6m16s)  kubelet          Node ha-604935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s (x2 over 6m16s)  kubelet          Node ha-604935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s (x2 over 6m16s)  kubelet          Node ha-604935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  NodeReady                5m57s                  kubelet          Node ha-604935 status is now: NodeReady
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	
	
	Name:               ha-604935-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:47:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:50:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-604935-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f21093f5748416fa30ea8181c31a3f7
	  System UUID:                0f21093f-5748-416f-a30e-a8181c31a3f7
	  Boot ID:                    5621b6a5-bb1a-408d-b692-10c4aad4b418
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xbb9t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-604935-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-l55rq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-604935-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-604935-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-w9r4x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-604935-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-604935-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-604935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-604935-m02 status is now: NodeNotReady
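Unlike the other three nodes, ha-604935-m02 has stopped reporting: its kubelet last renewed the node lease at 11:50:24, the node-lifecycle controller marked every condition Unknown at 11:51:05, and the unreachable NoSchedule/NoExecute taints are in place. A minimal client-go sketch for reading that Ready condition follows; it assumes the standard k8s.io/client-go and k8s.io/api modules and a kubeconfig at the default location, nothing minikube-specific.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the default kubeconfig (~/.kube/config) and read
        // the Ready condition of the unreachable secondary control-plane node.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-604935-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("%s Ready=%s reason=%s since=%v\n",
                    node.Name, c.Status, c.Reason, c.LastTransitionTime)
            }
        }
    }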
	
	
	Name:               ha-604935-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:48:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:49:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-604935-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8588450b38914bf3ac287b253d72fb4d
	  System UUID:                8588450b-3891-4bf3-ac28-7b253d72fb4d
	  Boot ID:                    735a98f4-21e5-4433-a99b-76bab3cbd392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l5kq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-604935-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-j4cr6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-604935-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-604935-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-rp7t2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-604935-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-604935-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-604935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	
	
	Name:               ha-604935-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-604935-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 577fefe5032840e68ccf6ba2b6fbcf44
	  System UUID:                577fefe5-0328-40e6-8ccf-6ba2b6fbcf44
	  Boot ID:                    5f3dbc6d-6884-49f4-acef-8235bb29f467
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwxsc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-proxy-v649d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)  kubelet          Node ha-604935-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m5s                 cidrAllocator    Node ha-604935-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-604935-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 2 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051551] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040036] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 2 11:46] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.564296] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.579239] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.318373] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057883] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148672] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.135107] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.277991] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.959381] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.016173] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058991] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.327237] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.069565] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.092272] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.163087] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 2 11:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46] <==
	{"level":"warn","ts":"2024-12-02T11:52:50.302846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.310563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.315217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.330356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.338525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.349639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.353157Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.355999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.363774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.368740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.369629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.375223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.379147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.382251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.387721Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.395776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.410392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.415497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.418041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.422095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.427848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.438326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.447840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.449619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:50.468530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:52:50 up 6 min,  0 users,  load average: 0.78, 0.44, 0.19
	Linux ha-604935 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10] <==
	I1202 11:52:12.903386       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:22.909600       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:22.909707       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:22.910146       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:22.910209       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:22.910752       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:22.910826       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:22.911166       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:22.911211       1 main.go:301] handling current node
	I1202 11:52:32.901182       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:32.901286       1 main.go:301] handling current node
	I1202 11:52:32.901341       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:32.901493       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:32.901812       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:32.901855       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:32.902073       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:32.903249       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:42.901238       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:42.901327       1 main.go:301] handling current node
	I1202 11:52:42.901361       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:42.901380       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:42.901720       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:42.901758       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:42.903817       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:42.903856       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6] <==
	I1202 11:46:32.842650       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 11:46:32.848385       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1202 11:46:32.849164       1 controller.go:615] quota admission added evaluator for: endpoints
	I1202 11:46:32.859606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 11:46:33.159098       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1202 11:46:34.294370       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1202 11:46:34.315176       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	http2: server: error reading preface from client 192.168.39.254:47786: read tcp 192.168.39.254:8443->192.168.39.254:47786: read: connection reset by peer
	I1202 11:46:34.492102       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1202 11:46:38.758671       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1202 11:46:38.805955       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1202 11:49:11.846753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54452: use of closed network connection
	E1202 11:49:12.028104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54460: use of closed network connection
	E1202 11:49:12.199806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54474: use of closed network connection
	E1202 11:49:12.392612       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54484: use of closed network connection
	E1202 11:49:12.562047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54506: use of closed network connection
	E1202 11:49:12.747509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54530: use of closed network connection
	E1202 11:49:12.939816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54544: use of closed network connection
	E1202 11:49:13.121199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54562: use of closed network connection
	E1202 11:49:13.295085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54584: use of closed network connection
	E1202 11:49:13.578607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54612: use of closed network connection
	E1202 11:49:13.757972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54638: use of closed network connection
	E1202 11:49:14.099757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54676: use of closed network connection
	E1202 11:49:14.269710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54694: use of closed network connection
	E1202 11:49:14.441652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	
	
	==> kube-controller-manager [275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41] <==
	I1202 11:49:45.139269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.144540       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.233566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.349805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.679160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.939032       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-604935-m04"
	I1202 11:49:47.939241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.969287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.605926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.681129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:55.357132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.214872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.215953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:50:04.236833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.619357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:15.555711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:51:05.313473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.313596       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:51:05.338955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.387666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.010033ms"
	I1202 11:51:05.388828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.832µs"
	I1202 11:51:05.441675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.06791ms"
	I1202 11:51:05.442993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.629µs"
	I1202 11:51:07.990253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:10.625653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	
	
	==> kube-proxy [f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 11:46:39.991996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 11:46:40.020254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1202 11:46:40.020650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:46:40.086409       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 11:46:40.086557       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 11:46:40.086602       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:46:40.089997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:46:40.090696       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:46:40.090739       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:46:40.096206       1 config.go:199] "Starting service config controller"
	I1202 11:46:40.096522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:46:40.096732       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:46:40.096763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:46:40.098314       1 config.go:328] "Starting node config controller"
	I1202 11:46:40.099010       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:46:40.196939       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:46:40.197006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:46:40.199281       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35] <==
	W1202 11:46:32.142852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 11:46:32.142937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.153652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.153702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.221641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 11:46:32.221961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.358170       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:46:32.358291       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:46:32.429924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.430007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.430758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:46:32.430825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.449596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:46:32.449697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.505859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:46:32.505943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1202 11:46:34.815786       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1202 11:49:07.673886       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	E1202 11:49:07.674510       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fc236bbd-f34b-454f-a66d-b369cd19cf9d(default/busybox-7dff88458-xbb9t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xbb9t"
	E1202 11:49:07.674758       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.675368       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb(default/busybox-7dff88458-8jxc4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8jxc4"
	E1202 11:49:07.675694       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" pod="default/busybox-7dff88458-8jxc4"
	I1202 11:49:07.676018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.678080       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" pod="default/busybox-7dff88458-xbb9t"
	I1202 11:49:07.679000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	
	
	==> kubelet <==
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:51:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518783    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518905    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520250    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520275    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524305    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524384    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526662    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526711    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.527977    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.528325    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530019    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530407    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.436289    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531571    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531618    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532768    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532808    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-604935 -n ha-604935
helpers_test.go:261: (dbg) Run:  kubectl --context ha-604935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.55s)
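
Note on the log excerpt above: most of the error-level lines are recurring background noise rather than the cause of this failure. The kubelet eviction manager repeats "failed to get HasDedicatedImageFs: missing image stats" roughly every ten seconds, the KUBE-KUBELET-CANARY setup fails because the guest kernel has no ip6tables nat table, and kube-proxy logs a one-off nftables cleanup error ("Operation not supported") at startup. When triaging these post-mortems it helps to tally the known-noisy patterns first so anything unexpected stands out. The following is a minimal, hypothetical Go sketch (not part of the minikube test suite; the file name, argument, and pattern list are assumptions) that does this against a saved copy of the "minikube logs" output:

	// triage.go - hypothetical helper, not part of minikube.
	// Tallies the recurring low-signal error patterns seen in the dump above so
	// that unexpected error lines stand out. Assumed usage:
	//   go run triage.go <saved-minikube-logs.txt>
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: go run triage.go <saved-minikube-logs.txt>")
			os.Exit(1)
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		// Known-noisy patterns copied from the excerpt above.
		noisy := []string{
			"failed to get HasDedicatedImageFs", // kubelet eviction manager, repeats ~every 10s
			"Could not set up iptables canary",  // kubelet canary: guest kernel lacks an ip6tables nat table
			"Error cleaning up nftables rules",  // kube-proxy startup cleanup on a kernel without nftables
		}
		counts := make(map[string]int)
		unexpected := 0

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some config-dump lines are very long
		for sc.Scan() {
			line := sc.Text()
			matched := false
			for _, p := range noisy {
				if strings.Contains(line, p) {
					counts[p]++
					matched = true
					break
				}
			}
			// "E1202" marks error-level klog lines in this particular (Dec 02) run.
			if !matched && strings.Contains(line, "E1202") {
				unexpected++
			}
		}
		for _, p := range noisy {
			fmt.Printf("%6d  %s\n", counts[p], p)
		}
		fmt.Printf("%6d  other error-level lines worth reading\n", unexpected)
	}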

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr: (4.09440226s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
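The four assertions above (ha_test.go:437-446) all parse the same "minikube status" output: after restarting m02 the test expects the profile to report three control-plane nodes, four running hosts, four running kubelets, and three running apiservers, and in this run the status text quoted after each message is empty, so every count comes up short. A rough, hypothetical Go sketch of that kind of check is shown below; the label strings ("type: Control Plane", "host: Running", and so on) are assumptions based on the usual human-readable status output, not a copy of the test's actual helper:

	// statuscheck.go - hypothetical illustration of the status assertions above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs (see ha_test.go:430 above).
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-604935",
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		if err != nil {
			// A stopped or degraded node makes "minikube status" exit non-zero.
			fmt.Println("status returned an error:", err)
		}
		s := string(out)

		controlPlanes := strings.Count(s, "type: Control Plane")
		hosts := strings.Count(s, "host: Running")
		kubelets := strings.Count(s, "kubelet: Running")
		apiservers := strings.Count(s, "apiserver: Running")

		// Expectations for this HA profile: 3 control planes + 1 worker.
		fmt.Printf("control planes: %d (want 3)\n", controlPlanes)
		fmt.Printf("hosts running: %d (want 4)\n", hosts)
		fmt.Printf("kubelets running: %d (want 4)\n", kubelets)
		fmt.Printf("apiservers running: %d (want 3)\n", apiservers)
	}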
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-604935 -n ha-604935
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 logs -n 25: (1.327714982s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m03_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m04 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp testdata/cp-test.txt                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m03 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-604935 node stop m02 -v=7                                                     | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-604935 node start m02 -v=7                                                    | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:45:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:45:51.477333   23379 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:51.477429   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477436   23379 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:51.477440   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477579   23379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:45:51.478080   23379 out.go:352] Setting JSON to false
	I1202 11:45:51.478853   23379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1703,"bootTime":1733138248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:45:51.478907   23379 start.go:139] virtualization: kvm guest
	I1202 11:45:51.480873   23379 out.go:177] * [ha-604935] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:45:51.482060   23379 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:45:51.482068   23379 notify.go:220] Checking for updates...
	I1202 11:45:51.484245   23379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:45:51.485502   23379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:45:51.486630   23379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.487842   23379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:45:51.488928   23379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:45:51.490194   23379 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:45:51.523210   23379 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 11:45:51.524197   23379 start.go:297] selected driver: kvm2
	I1202 11:45:51.524207   23379 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:45:51.524217   23379 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:45:51.524886   23379 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.524953   23379 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:45:51.538752   23379 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:45:51.538805   23379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:45:51.539057   23379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:45:51.539096   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:45:51.539154   23379 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1202 11:45:51.539162   23379 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:45:51.539222   23379 start.go:340] cluster config:
	{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:45:51.539330   23379 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.540849   23379 out.go:177] * Starting "ha-604935" primary control-plane node in "ha-604935" cluster
	I1202 11:45:51.542035   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:45:51.542064   23379 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:45:51.542073   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:45:51.542155   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:45:51.542168   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:45:51.542474   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:45:51.542495   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json: {Name:mkd56e76e09e18927ad08e110fcb7c73441ee1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:45:51.542653   23379 start.go:360] acquireMachinesLock for ha-604935: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:45:51.542690   23379 start.go:364] duration metric: took 21.87µs to acquireMachinesLock for "ha-604935"
	I1202 11:45:51.542712   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:45:51.542769   23379 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 11:45:51.544215   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:45:51.544376   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:51.544410   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:51.558068   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I1202 11:45:51.558542   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:51.559117   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:45:51.559144   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:51.559441   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:51.559624   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:45:51.559747   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:45:51.559887   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:45:51.559913   23379 client.go:168] LocalClient.Create starting
	I1202 11:45:51.559938   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:45:51.559978   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.559999   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560059   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:45:51.560086   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.560103   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560134   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:45:51.560147   23379 main.go:141] libmachine: (ha-604935) Calling .PreCreateCheck
	I1202 11:45:51.560467   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:45:51.560846   23379 main.go:141] libmachine: Creating machine...
	I1202 11:45:51.560861   23379 main.go:141] libmachine: (ha-604935) Calling .Create
	I1202 11:45:51.560982   23379 main.go:141] libmachine: (ha-604935) Creating KVM machine...
	I1202 11:45:51.562114   23379 main.go:141] libmachine: (ha-604935) DBG | found existing default KVM network
	I1202 11:45:51.562698   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.562571   23402 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002231e0}
	I1202 11:45:51.562725   23379 main.go:141] libmachine: (ha-604935) DBG | created network xml: 
	I1202 11:45:51.562738   23379 main.go:141] libmachine: (ha-604935) DBG | <network>
	I1202 11:45:51.562750   23379 main.go:141] libmachine: (ha-604935) DBG |   <name>mk-ha-604935</name>
	I1202 11:45:51.562762   23379 main.go:141] libmachine: (ha-604935) DBG |   <dns enable='no'/>
	I1202 11:45:51.562773   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562781   23379 main.go:141] libmachine: (ha-604935) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1202 11:45:51.562793   23379 main.go:141] libmachine: (ha-604935) DBG |     <dhcp>
	I1202 11:45:51.562803   23379 main.go:141] libmachine: (ha-604935) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1202 11:45:51.562814   23379 main.go:141] libmachine: (ha-604935) DBG |     </dhcp>
	I1202 11:45:51.562827   23379 main.go:141] libmachine: (ha-604935) DBG |   </ip>
	I1202 11:45:51.562839   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562849   23379 main.go:141] libmachine: (ha-604935) DBG | </network>
	I1202 11:45:51.562861   23379 main.go:141] libmachine: (ha-604935) DBG | 
	I1202 11:45:51.567359   23379 main.go:141] libmachine: (ha-604935) DBG | trying to create private KVM network mk-ha-604935 192.168.39.0/24...
	I1202 11:45:51.627851   23379 main.go:141] libmachine: (ha-604935) DBG | private KVM network mk-ha-604935 192.168.39.0/24 created
	I1202 11:45:51.627878   23379 main.go:141] libmachine: (ha-604935) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:51.627909   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.627845   23402 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.627936   23379 main.go:141] libmachine: (ha-604935) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:45:51.627956   23379 main.go:141] libmachine: (ha-604935) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:45:51.873906   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.873783   23402 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa...
	I1202 11:45:52.258389   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258298   23402 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk...
	I1202 11:45:52.258412   23379 main.go:141] libmachine: (ha-604935) DBG | Writing magic tar header
	I1202 11:45:52.258421   23379 main.go:141] libmachine: (ha-604935) DBG | Writing SSH key tar header
	I1202 11:45:52.258433   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258404   23402 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:52.258549   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935
	I1202 11:45:52.258587   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:45:52.258600   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 (perms=drwx------)
	I1202 11:45:52.258612   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:45:52.258622   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:45:52.258639   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:45:52.258670   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:45:52.258686   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:52.258699   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:45:52.258711   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:45:52.258726   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:45:52.258742   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:52.258748   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:45:52.258755   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home
	I1202 11:45:52.258760   23379 main.go:141] libmachine: (ha-604935) DBG | Skipping /home - not owner
	I1202 11:45:52.259679   23379 main.go:141] libmachine: (ha-604935) define libvirt domain using xml: 
	I1202 11:45:52.259691   23379 main.go:141] libmachine: (ha-604935) <domain type='kvm'>
	I1202 11:45:52.259699   23379 main.go:141] libmachine: (ha-604935)   <name>ha-604935</name>
	I1202 11:45:52.259718   23379 main.go:141] libmachine: (ha-604935)   <memory unit='MiB'>2200</memory>
	I1202 11:45:52.259726   23379 main.go:141] libmachine: (ha-604935)   <vcpu>2</vcpu>
	I1202 11:45:52.259737   23379 main.go:141] libmachine: (ha-604935)   <features>
	I1202 11:45:52.259745   23379 main.go:141] libmachine: (ha-604935)     <acpi/>
	I1202 11:45:52.259755   23379 main.go:141] libmachine: (ha-604935)     <apic/>
	I1202 11:45:52.259762   23379 main.go:141] libmachine: (ha-604935)     <pae/>
	I1202 11:45:52.259776   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.259792   23379 main.go:141] libmachine: (ha-604935)   </features>
	I1202 11:45:52.259808   23379 main.go:141] libmachine: (ha-604935)   <cpu mode='host-passthrough'>
	I1202 11:45:52.259826   23379 main.go:141] libmachine: (ha-604935)   
	I1202 11:45:52.259835   23379 main.go:141] libmachine: (ha-604935)   </cpu>
	I1202 11:45:52.259843   23379 main.go:141] libmachine: (ha-604935)   <os>
	I1202 11:45:52.259851   23379 main.go:141] libmachine: (ha-604935)     <type>hvm</type>
	I1202 11:45:52.259863   23379 main.go:141] libmachine: (ha-604935)     <boot dev='cdrom'/>
	I1202 11:45:52.259871   23379 main.go:141] libmachine: (ha-604935)     <boot dev='hd'/>
	I1202 11:45:52.259896   23379 main.go:141] libmachine: (ha-604935)     <bootmenu enable='no'/>
	I1202 11:45:52.259912   23379 main.go:141] libmachine: (ha-604935)   </os>
	I1202 11:45:52.259917   23379 main.go:141] libmachine: (ha-604935)   <devices>
	I1202 11:45:52.259925   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='cdrom'>
	I1202 11:45:52.259935   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/boot2docker.iso'/>
	I1202 11:45:52.259939   23379 main.go:141] libmachine: (ha-604935)       <target dev='hdc' bus='scsi'/>
	I1202 11:45:52.259944   23379 main.go:141] libmachine: (ha-604935)       <readonly/>
	I1202 11:45:52.259951   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259956   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='disk'>
	I1202 11:45:52.259963   23379 main.go:141] libmachine: (ha-604935)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:45:52.259970   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk'/>
	I1202 11:45:52.259978   23379 main.go:141] libmachine: (ha-604935)       <target dev='hda' bus='virtio'/>
	I1202 11:45:52.259982   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259992   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260000   23379 main.go:141] libmachine: (ha-604935)       <source network='mk-ha-604935'/>
	I1202 11:45:52.260004   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260011   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260015   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260020   23379 main.go:141] libmachine: (ha-604935)       <source network='default'/>
	I1202 11:45:52.260026   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260031   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260035   23379 main.go:141] libmachine: (ha-604935)     <serial type='pty'>
	I1202 11:45:52.260040   23379 main.go:141] libmachine: (ha-604935)       <target port='0'/>
	I1202 11:45:52.260045   23379 main.go:141] libmachine: (ha-604935)     </serial>
	I1202 11:45:52.260050   23379 main.go:141] libmachine: (ha-604935)     <console type='pty'>
	I1202 11:45:52.260059   23379 main.go:141] libmachine: (ha-604935)       <target type='serial' port='0'/>
	I1202 11:45:52.260081   23379 main.go:141] libmachine: (ha-604935)     </console>
	I1202 11:45:52.260097   23379 main.go:141] libmachine: (ha-604935)     <rng model='virtio'>
	I1202 11:45:52.260105   23379 main.go:141] libmachine: (ha-604935)       <backend model='random'>/dev/random</backend>
	I1202 11:45:52.260113   23379 main.go:141] libmachine: (ha-604935)     </rng>
	I1202 11:45:52.260119   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260131   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260139   23379 main.go:141] libmachine: (ha-604935)   </devices>
	I1202 11:45:52.260142   23379 main.go:141] libmachine: (ha-604935) </domain>
	I1202 11:45:52.260148   23379 main.go:141] libmachine: (ha-604935) 
	I1202 11:45:52.264453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e2:c6:db in network default
	I1202 11:45:52.264963   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:52.264976   23379 main.go:141] libmachine: (ha-604935) Ensuring networks are active...
	I1202 11:45:52.265536   23379 main.go:141] libmachine: (ha-604935) Ensuring network default is active
	I1202 11:45:52.265809   23379 main.go:141] libmachine: (ha-604935) Ensuring network mk-ha-604935 is active
	I1202 11:45:52.266301   23379 main.go:141] libmachine: (ha-604935) Getting domain xml...
	I1202 11:45:52.266972   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:53.425942   23379 main.go:141] libmachine: (ha-604935) Waiting to get IP...
	I1202 11:45:53.426812   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.427160   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.427221   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.427145   23402 retry.go:31] will retry after 201.077519ms: waiting for machine to come up
	I1202 11:45:53.629564   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.629950   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.629976   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.629910   23402 retry.go:31] will retry after 339.273732ms: waiting for machine to come up
	I1202 11:45:53.970328   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.970740   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.970764   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.970705   23402 retry.go:31] will retry after 350.772564ms: waiting for machine to come up
	I1202 11:45:54.323244   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.323628   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.323652   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.323595   23402 retry.go:31] will retry after 510.154735ms: waiting for machine to come up
	I1202 11:45:54.834818   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.835184   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.835211   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.835141   23402 retry.go:31] will retry after 497.813223ms: waiting for machine to come up
	I1202 11:45:55.334326   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.334697   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.334728   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.334631   23402 retry.go:31] will retry after 593.538742ms: waiting for machine to come up
	I1202 11:45:55.929133   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.929547   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.929575   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.929508   23402 retry.go:31] will retry after 1.005519689s: waiting for machine to come up
	I1202 11:45:56.936100   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:56.936549   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:56.936581   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:56.936492   23402 retry.go:31] will retry after 1.273475187s: waiting for machine to come up
	I1202 11:45:58.211849   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:58.212240   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:58.212280   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:58.212213   23402 retry.go:31] will retry after 1.292529083s: waiting for machine to come up
	I1202 11:45:59.506572   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:59.506909   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:59.506934   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:59.506880   23402 retry.go:31] will retry after 1.800735236s: waiting for machine to come up
	I1202 11:46:01.309936   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:01.310447   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:01.310467   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:01.310416   23402 retry.go:31] will retry after 2.83980414s: waiting for machine to come up
	I1202 11:46:04.153261   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:04.153728   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:04.153748   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:04.153704   23402 retry.go:31] will retry after 2.497515599s: waiting for machine to come up
	I1202 11:46:06.652765   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:06.653095   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:06.653119   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:06.653068   23402 retry.go:31] will retry after 2.762441656s: waiting for machine to come up
	I1202 11:46:09.418859   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:09.419194   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:09.419220   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:09.419149   23402 retry.go:31] will retry after 3.896839408s: waiting for machine to come up
	I1202 11:46:13.318223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318677   23379 main.go:141] libmachine: (ha-604935) Found IP for machine: 192.168.39.102
	I1202 11:46:13.318696   23379 main.go:141] libmachine: (ha-604935) Reserving static IP address...
	I1202 11:46:13.318709   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has current primary IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318957   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "ha-604935", mac: "52:54:00:e0:fa:7c", ip: "192.168.39.102"} in network mk-ha-604935
	I1202 11:46:13.386650   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:13.386676   23379 main.go:141] libmachine: (ha-604935) Reserved static IP address: 192.168.39.102
	I1202 11:46:13.386705   23379 main.go:141] libmachine: (ha-604935) Waiting for SSH to be available...
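The retry messages above show the KVM driver polling libvirt for the DHCP-assigned address, backing off a little longer on each attempt. A minimal Go sketch of that wait-with-backoff pattern (lookupIP and waitForIP are placeholder names, not minikube's actual retry.go):

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the guest's MAC.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP retries lookupIP with a jittered, growing delay, roughly mirroring
// the "will retry after ..." messages in the log above.
func waitForIP(timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    delay := 200 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(); err == nil {
            return ip, nil
        }
        time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
        if delay < 4*time.Second {
            delay *= 2
        }
    }
    return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
}

func main() {
    ip, err := waitForIP(2 * time.Second)
    fmt.Println(ip, err)
}
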
	I1202 11:46:13.389178   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.389540   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935
	I1202 11:46:13.389567   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:e0:fa:7c
	I1202 11:46:13.389737   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:13.389771   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:13.389833   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:13.389853   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:13.389865   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:13.393280   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:46:13.393302   23379 main.go:141] libmachine: (ha-604935) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:46:13.393311   23379 main.go:141] libmachine: (ha-604935) DBG | command : exit 0
	I1202 11:46:13.393319   23379 main.go:141] libmachine: (ha-604935) DBG | err     : exit status 255
	I1202 11:46:13.393329   23379 main.go:141] libmachine: (ha-604935) DBG | output  : 
	I1202 11:46:16.395489   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:16.397696   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398004   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.398035   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398057   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:16.398092   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:16.398150   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:16.398173   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:16.398186   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:16.524025   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: <nil>: 
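The WaitForSSH step above shells out to the external ssh client and runs "exit 0" until it returns status 0; the first probe fails with status 255 because sshd inside the guest is not up yet. A rough Go equivalent, assuming an ssh binary on PATH (a sketch, not libmachine's actual code):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// sshReady returns nil once "ssh ... exit 0" against the guest succeeds.
func sshReady(ip, keyPath string) error {
    for attempt := 0; attempt < 10; attempt++ {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        if err := cmd.Run(); err == nil {
            return nil // sshd is up and accepts our key
        }
        time.Sleep(3 * time.Second) // roughly the gap between the two probes in the log
    }
    return fmt.Errorf("ssh to %s never became available", ip)
}

func main() {
    fmt.Println(sshReady("192.168.39.102", "/path/to/id_rsa"))
}
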
	I1202 11:46:16.524319   23379 main.go:141] libmachine: (ha-604935) KVM machine creation complete!
	I1202 11:46:16.524585   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:16.525132   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525296   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525429   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:46:16.525444   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:16.526494   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:46:16.526509   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:46:16.526516   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:46:16.526523   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.528453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.528856   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.528879   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.529035   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.529215   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529537   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.529694   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.529924   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.529940   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:46:16.639198   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.639221   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:46:16.639229   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.641755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642065   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.642082   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642197   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.642389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642587   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642718   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.642866   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.643032   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.643046   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:46:16.748649   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:46:16.748721   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:46:16.748732   23379 main.go:141] libmachine: Provisioning with buildroot...
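Provisioner detection is a "cat /etc/os-release" followed by a key/value parse; NAME=Buildroot selects the buildroot provisioner. A small sketch of that parse (parseOSRelease is an illustrative name):

package main

import (
    "bufio"
    "fmt"
    "strings"
)

// parseOSRelease turns the KEY=VALUE lines of /etc/os-release into a map,
// stripping surrounding quotes from the values.
func parseOSRelease(contents string) map[string]string {
    out := map[string]string{}
    sc := bufio.NewScanner(strings.NewReader(contents))
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        if line == "" || strings.HasPrefix(line, "#") {
            continue
        }
        k, v, ok := strings.Cut(line, "=")
        if !ok {
            continue
        }
        out[k] = strings.Trim(v, `"`)
    }
    return out
}

func main() {
    osr := parseOSRelease("NAME=Buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n")
    fmt.Println(osr["NAME"], osr["VERSION_ID"]) // Buildroot 2023.02.9
}
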
	I1202 11:46:16.748738   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.748943   23379 buildroot.go:166] provisioning hostname "ha-604935"
	I1202 11:46:16.748965   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.749139   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.751455   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751828   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.751862   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751971   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.752141   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752285   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752419   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.752578   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.752754   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.752769   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935 && echo "ha-604935" | sudo tee /etc/hostname
	I1202 11:46:16.869057   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:46:16.869084   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.871187   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871464   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.871482   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871651   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.871810   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.871940   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.872049   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.872201   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.872396   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.872412   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:46:16.984630   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
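The hostname step runs a single shell snippet over SSH: set the hostname, then patch the 127.0.1.1 line in /etc/hosts only if it is missing or stale, so re-provisioning stays idempotent. Building that command string might look like this (a sketch, not the exact provisioner code):

package main

import "fmt"

// hostnameCmd renders the shell snippet run over SSH above: set the hostname,
// then add or rewrite the 127.0.1.1 line in /etc/hosts so reruns are no-ops.
func hostnameCmd(name string) string {
    return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
    fmt.Println(hostnameCmd("ha-604935"))
}
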
	I1202 11:46:16.984655   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:46:16.984684   23379 buildroot.go:174] setting up certificates
	I1202 11:46:16.984696   23379 provision.go:84] configureAuth start
	I1202 11:46:16.984709   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.984946   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:16.987426   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987732   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.987755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987901   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.989843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990098   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.990122   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990257   23379 provision.go:143] copyHostCerts
	I1202 11:46:16.990285   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990325   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:46:16.990334   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990403   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:46:16.990485   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990508   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:46:16.990522   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990547   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:46:16.990600   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990616   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:46:16.990622   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990641   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:46:16.990697   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935 san=[127.0.0.1 192.168.39.102 ha-604935 localhost minikube]
	I1202 11:46:17.091711   23379 provision.go:177] copyRemoteCerts
	I1202 11:46:17.091762   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:46:17.091783   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.093867   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094147   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.094176   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094310   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.094467   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.094595   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.094701   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.178212   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:46:17.178264   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:46:17.201820   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:46:17.201876   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:46:17.224492   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:46:17.224550   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1202 11:46:17.246969   23379 provision.go:87] duration metric: took 262.263543ms to configureAuth
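configureAuth copies the host CA material into the profile and generates a server certificate whose SANs cover the hostname, the localhost names and the VM IP (the san=[...] list above), then scp's server.pem, server-key.pem and ca.pem into /etc/docker on the guest. A condensed crypto/x509 sketch of SAN-bearing certificate generation (illustrative only, not minikube's cert code):

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "net"
    "time"
)

func main() {
    // Throwaway CA for the sketch; minikube reuses the existing ca.pem/ca-key.pem.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Server certificate carrying the SANs seen in the log.
    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935"}},
        DNSNames:     []string{"ha-604935", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    fmt.Println(len(srvDER), err)
}
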
	I1202 11:46:17.246987   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:46:17.247165   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:17.247239   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.249583   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.249877   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.249899   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.250032   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.250183   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250315   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250423   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.250529   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.250670   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.250686   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:46:17.469650   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:46:17.469676   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:46:17.469685   23379 main.go:141] libmachine: (ha-604935) Calling .GetURL
	I1202 11:46:17.470859   23379 main.go:141] libmachine: (ha-604935) DBG | Using libvirt version 6000000
	I1202 11:46:17.472792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473049   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.473078   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473161   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:46:17.473172   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:46:17.473179   23379 client.go:171] duration metric: took 25.91325953s to LocalClient.Create
	I1202 11:46:17.473201   23379 start.go:167] duration metric: took 25.913314916s to libmachine.API.Create "ha-604935"
	I1202 11:46:17.473214   23379 start.go:293] postStartSetup for "ha-604935" (driver="kvm2")
	I1202 11:46:17.473228   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:46:17.473243   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.473431   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:46:17.473460   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.475686   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.475977   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.476003   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.476117   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.476292   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.476424   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.476570   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.558504   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:46:17.562731   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:46:17.562753   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:46:17.562801   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:46:17.562870   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:46:17.562886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:46:17.562973   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:46:17.572589   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:17.596338   23379 start.go:296] duration metric: took 123.108175ms for postStartSetup
	I1202 11:46:17.596385   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:17.596933   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.599535   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.599863   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.599888   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.600036   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:17.600197   23379 start.go:128] duration metric: took 26.057419293s to createHost
	I1202 11:46:17.600216   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.602393   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602679   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.602700   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602888   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.603033   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603150   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603243   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.603351   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.603548   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.603565   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:46:17.708694   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139977.687468447
	
	I1202 11:46:17.708715   23379 fix.go:216] guest clock: 1733139977.687468447
	I1202 11:46:17.708724   23379 fix.go:229] Guest: 2024-12-02 11:46:17.687468447 +0000 UTC Remote: 2024-12-02 11:46:17.600208028 +0000 UTC m=+26.158965969 (delta=87.260419ms)
	I1202 11:46:17.708747   23379 fix.go:200] guest clock delta is within tolerance: 87.260419ms
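The clock check runs "date +%s.%N" on the guest and compares it with the host clock; only a delta beyond the tolerance would trigger an adjustment, and the 87ms measured here passes. A minimal version of that comparison (clockDelta is an illustrative helper; the one-second tolerance is an assumption, the real value is not shown in the log):

package main

import (
    "fmt"
    "strconv"
    "time"
)

// clockDelta parses the guest's "seconds.nanoseconds" timestamp and returns the
// signed difference from the host clock.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
    secs, err := strconv.ParseFloat(guest, 64)
    if err != nil {
        return 0, err
    }
    guestTime := time.Unix(0, int64(secs*float64(time.Second)))
    return guestTime.Sub(host), nil
}

func main() {
    const tolerance = time.Second // assumed tolerance for this sketch
    d, err := clockDelta("1733139977.687468447", time.Unix(1733139977, 600208028))
    if err != nil {
        panic(err)
    }
    fmt.Printf("delta=%s, within tolerance=%v\n", d, d > -tolerance && d < tolerance)
}
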
	I1202 11:46:17.708757   23379 start.go:83] releasing machines lock for "ha-604935", held for 26.166055586s
	I1202 11:46:17.708779   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.708992   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.711541   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711821   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.711843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711972   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712458   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712646   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712736   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:46:17.712776   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.712829   23379 ssh_runner.go:195] Run: cat /version.json
	I1202 11:46:17.712853   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.715060   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715759   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.715798   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715960   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716014   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716187   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716313   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.716339   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716347   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716430   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716502   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.716582   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716706   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716827   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.792614   23379 ssh_runner.go:195] Run: systemctl --version
	I1202 11:46:17.813470   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:46:17.973535   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:46:17.979920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:46:17.979975   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:46:17.995437   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:46:17.995459   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:46:17.995503   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:46:18.012152   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:46:18.026749   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:46:18.026813   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:46:18.040895   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:46:18.054867   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:46:18.182673   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:46:18.307537   23379 docker.go:233] disabling docker service ...
	I1202 11:46:18.307608   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:46:18.321854   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:46:18.334016   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:46:18.463785   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:46:18.581750   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:46:18.594915   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:46:18.612956   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:46:18.613013   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.623443   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:46:18.623494   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.633789   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.643912   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.654023   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:46:18.664581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.674994   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.691561   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.701797   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:46:18.711042   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:46:18.711090   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:46:18.724638   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:46:18.733743   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:18.862034   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
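The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon_cgroup to pod, and open unprivileged ports via default_sysctls, before crictl.yaml is written and crio is restarted. A hedged reconstruction of the resulting files, rendered from Go (the section names follow the stock CRI-O drop-in layout; the VM's real file may carry additional keys):

package main

import (
    "fmt"
    "os"
)

// Reconstructed contents after the sed edits above; not captured from the VM.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"

func main() {
    // Written to the working directory here; on the VM these live under /etc.
    if err := os.WriteFile("02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
        panic(err)
    }
    if err := os.WriteFile("crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
        panic(err)
    }
    fmt.Println("wrote 02-crio.conf and crictl.yaml")
}
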
	I1202 11:46:18.949557   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:46:18.949630   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:46:18.954402   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:46:18.954482   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:46:18.958128   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:46:18.997454   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:46:18.997519   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.025104   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.055599   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:46:19.056875   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:19.059223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059530   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:19.059555   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059704   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:46:19.063855   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:19.078703   23379 kubeadm.go:883] updating cluster {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:46:19.078793   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:19.078828   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:19.116305   23379 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 11:46:19.116376   23379 ssh_runner.go:195] Run: which lz4
	I1202 11:46:19.120271   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1202 11:46:19.120778   23379 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 11:46:19.126218   23379 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 11:46:19.126239   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 11:46:20.425373   23379 crio.go:462] duration metric: took 1.305048201s to copy over tarball
	I1202 11:46:20.425452   23379 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 11:46:22.441192   23379 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01571139s)
	I1202 11:46:22.441225   23379 crio.go:469] duration metric: took 2.015821089s to extract the tarball
	I1202 11:46:22.441233   23379 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 11:46:22.478991   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:22.530052   23379 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:46:22.530074   23379 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:46:22.530083   23379 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1202 11:46:22.530186   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:46:22.530263   23379 ssh_runner.go:195] Run: crio config
	I1202 11:46:22.572985   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:22.573005   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:22.573014   23379 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:46:22.573034   23379 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-604935 NodeName:ha-604935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:46:22.573152   23379 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-604935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
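The generated kubeadm config above pins the pod subnet (10.244.0.0/16) and the service subnet (10.96.0.0/12); the two ranges have to be disjoint for routing to work. A quick overlap check with net.ParseCIDR (a standalone sketch, not part of minikube):

package main

import (
    "fmt"
    "net"
)

// overlaps reports whether two CIDR blocks share any addresses; CIDR blocks are
// either nested or disjoint, so checking both base addresses is enough.
func overlaps(a, b string) bool {
    _, na, _ := net.ParseCIDR(a)
    _, nb, _ := net.ParseCIDR(b)
    return na.Contains(nb.IP) || nb.Contains(na.IP)
}

func main() {
    fmt.Println(overlaps("10.244.0.0/16", "10.96.0.0/12")) // false: pod and service ranges are disjoint
}
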
	I1202 11:46:22.573183   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:46:22.573233   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:46:22.589221   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:46:22.589338   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
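The kube-vip static pod above is rendered from a template parameterized by the VIP (192.168.39.254), the API server port and the kube-vip image, then copied to /etc/kubernetes/manifests/kube-vip.yaml. A trimmed text/template sketch of that rendering (the field names and the shortened manifest are illustrative, not minikube's kube-vip.go):

package main

import (
    "os"
    "text/template"
)

// Trimmed-down version of the manifest above; only the per-cluster values are templated.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

type vipParams struct {
    VIP   string
    Port  string
    Image string
}

func main() {
    t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    p := vipParams{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.6"}
    if err := t.Execute(os.Stdout, p); err != nil {
        panic(err)
    }
}
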
	I1202 11:46:22.589405   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:46:22.599190   23379 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:46:22.599242   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:46:22.608607   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1202 11:46:22.624652   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:46:22.640379   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1202 11:46:22.655900   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1202 11:46:22.671590   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:46:22.675287   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
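
	The shell one-liner above drops any stale control-plane.minikube.internal record from /etc/hosts and appends the current one. A minimal Go sketch of the same "filter then append" logic, using only the standard library, is shown here; it writes the candidate file to /tmp rather than touching /etc/hosts directly, since the real flow copies it into place with sudo.

	// Sketch only: rebuild /etc/hosts with a single record for the control-plane name.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.39.254" // VIP taken from the log above

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the old record, mirroring `grep -v $'\t<host>$'`
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))

		// The real flow then copies this candidate over /etc/hosts with sudo.
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
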
	I1202 11:46:22.687449   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:22.815343   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:46:22.830770   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.102
	I1202 11:46:22.830783   23379 certs.go:194] generating shared ca certs ...
	I1202 11:46:22.830798   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.830938   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:46:22.830989   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:46:22.831001   23379 certs.go:256] generating profile certs ...
	I1202 11:46:22.831074   23379 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:46:22.831100   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt with IP's: []
	I1202 11:46:22.963911   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt ...
	I1202 11:46:22.963935   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt: {Name:mk5750a5db627315b9b01ec40b88a97f880b8d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964093   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key ...
	I1202 11:46:22.964105   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key: {Name:mk12b4799c6c082b6ae6dcb6d50922caccda6be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964176   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd
	I1202 11:46:22.964216   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1202 11:46:23.245751   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd ...
	I1202 11:46:23.245777   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd: {Name:mkd02d0517ee36862fb48fa866d0eddc37aac5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.245919   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd ...
	I1202 11:46:23.245934   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd: {Name:mkafae41baf5ffd85374c686e8a6a230d6cd62ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.246014   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:46:23.246102   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:46:23.246163   23379 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:46:23.246178   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt with IP's: []
	I1202 11:46:23.398901   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt ...
	I1202 11:46:23.398937   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt: {Name:mk59ab7004f92d658850310a3f6a84461f824e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399105   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key ...
	I1202 11:46:23.399117   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key: {Name:mk4341731ba8ea8693d50dafd7cfc413608c74fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399195   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:46:23.399214   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:46:23.399232   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:46:23.399248   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:46:23.399263   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:46:23.399278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:46:23.399293   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:46:23.399307   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:46:23.399357   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:46:23.399393   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:46:23.399404   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:46:23.399426   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:46:23.399453   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:46:23.399485   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:46:23.399528   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:23.399560   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.399576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.399590   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.400135   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:46:23.425287   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:46:23.447899   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:46:23.470786   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:46:23.493867   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 11:46:23.517308   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:46:23.540273   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:46:23.562862   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:46:23.587751   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:46:23.615307   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:46:23.645819   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:46:23.670226   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:46:23.686120   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:46:23.691724   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:46:23.702611   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.706991   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.707032   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.712771   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:46:23.723671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:46:23.734402   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738713   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738746   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.744060   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:46:23.754804   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:46:23.765363   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769594   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769630   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.774953   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
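
	The sequence above copies each PEM into /usr/share/ca-certificates, asks `openssl x509 -hash -noout` for its subject hash, and symlinks /etc/ssl/certs/<hash>.0 so OpenSSL-based clients trust it. A small Go sketch of that hash-and-symlink pattern follows; the certificate path is one of those shown in the log, and writing into /etc/ssl/certs would of course require root.

	// Sketch only: create the /etc/ssl/certs/<hash>.0 link for one CA certificate.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Equivalent to `test -L <link> || ln -fs <pem> <link>` in the log above.
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(pem, link); err != nil {
				panic(err)
			}
		}
	}
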
	I1202 11:46:23.785412   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:46:23.789341   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:46:23.789402   23379 kubeadm.go:392] StartCluster: {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:46:23.789461   23379 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:46:23.789507   23379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:46:23.829185   23379 cri.go:89] found id: ""
	I1202 11:46:23.829258   23379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:46:23.839482   23379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:46:23.849018   23379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:46:23.858723   23379 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:46:23.858741   23379 kubeadm.go:157] found existing configuration files:
	
	I1202 11:46:23.858784   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:46:23.867813   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:46:23.867858   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:46:23.877083   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:46:23.886137   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:46:23.886182   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:46:23.895526   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.904513   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:46:23.904574   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.913938   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:46:23.922913   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:46:23.922950   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 11:46:23.932249   23379 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 11:46:24.043553   23379 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:46:24.043623   23379 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:46:24.150207   23379 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:46:24.150352   23379 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:46:24.150497   23379 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:46:24.159626   23379 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:46:24.195667   23379 out.go:235]   - Generating certificates and keys ...
	I1202 11:46:24.195776   23379 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:46:24.195834   23379 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:46:24.358436   23379 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:46:24.683719   23379 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:46:24.943667   23379 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:46:25.032560   23379 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:46:25.140726   23379 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:46:25.140883   23379 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.414720   23379 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:46:25.414972   23379 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.596308   23379 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:46:25.682848   23379 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:46:25.908682   23379 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:46:25.908968   23379 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:46:26.057865   23379 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:46:26.190529   23379 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:46:26.320151   23379 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:46:26.522118   23379 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:46:26.687579   23379 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:46:26.688353   23379 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:46:26.693709   23379 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:46:26.695397   23379 out.go:235]   - Booting up control plane ...
	I1202 11:46:26.695494   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:46:26.695563   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:46:26.696118   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:46:26.712309   23379 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:46:26.721469   23379 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:46:26.721525   23379 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:46:26.849672   23379 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:46:26.849831   23379 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:46:27.850918   23379 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001821143s
	I1202 11:46:27.850997   23379 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:46:33.482873   23379 kubeadm.go:310] [api-check] The API server is healthy after 5.633037057s
	I1202 11:46:33.492749   23379 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:46:33.512336   23379 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:46:34.037238   23379 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:46:34.037452   23379 kubeadm.go:310] [mark-control-plane] Marking the node ha-604935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:46:34.050856   23379 kubeadm.go:310] [bootstrap-token] Using token: 8kw29b.di3rsap6xz9ot94t
	I1202 11:46:34.052035   23379 out.go:235]   - Configuring RBAC rules ...
	I1202 11:46:34.052182   23379 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:46:34.058440   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:46:34.073861   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:46:34.076499   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:46:34.079628   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:46:34.084760   23379 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:46:34.097556   23379 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:46:34.326607   23379 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:46:34.887901   23379 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:46:34.889036   23379 kubeadm.go:310] 
	I1202 11:46:34.889140   23379 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:46:34.889169   23379 kubeadm.go:310] 
	I1202 11:46:34.889273   23379 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:46:34.889281   23379 kubeadm.go:310] 
	I1202 11:46:34.889308   23379 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:46:34.889389   23379 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:46:34.889465   23379 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:46:34.889475   23379 kubeadm.go:310] 
	I1202 11:46:34.889554   23379 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:46:34.889564   23379 kubeadm.go:310] 
	I1202 11:46:34.889639   23379 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:46:34.889649   23379 kubeadm.go:310] 
	I1202 11:46:34.889720   23379 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:46:34.889845   23379 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:46:34.889909   23379 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:46:34.889916   23379 kubeadm.go:310] 
	I1202 11:46:34.889990   23379 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:46:34.890073   23379 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:46:34.890084   23379 kubeadm.go:310] 
	I1202 11:46:34.890170   23379 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890282   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 11:46:34.890321   23379 kubeadm.go:310] 	--control-plane 
	I1202 11:46:34.890328   23379 kubeadm.go:310] 
	I1202 11:46:34.890409   23379 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:46:34.890416   23379 kubeadm.go:310] 
	I1202 11:46:34.890483   23379 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890568   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 11:46:34.891577   23379 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 11:46:34.891597   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:34.891603   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:34.892960   23379 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 11:46:34.893988   23379 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 11:46:34.899231   23379 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 11:46:34.899255   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 11:46:34.917969   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 11:46:35.272118   23379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:46:35.272198   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.272259   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935 minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=true
	I1202 11:46:35.310028   23379 ops.go:34] apiserver oom_adj: -16
	I1202 11:46:35.408095   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.908268   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.408944   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.909158   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.408454   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.909038   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.408700   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.908314   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:39.023834   23379 kubeadm.go:1113] duration metric: took 3.751689624s to wait for elevateKubeSystemPrivileges
	I1202 11:46:39.023871   23379 kubeadm.go:394] duration metric: took 15.234471878s to StartCluster
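
	The repeated `kubectl get sa default` runs above (roughly every 500ms until 11:46:39) are a poll-until-ready loop: the cluster is not considered usable until the "default" service account exists. A small illustrative Go sketch of that polling pattern follows; the kubectl binary and kubeconfig path are placeholders taken from the log, not a claim about minikube's internal implementation.

	// Sketch only: poll until `kubectl get sa default` succeeds or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing seen in the log
		}
		fmt.Println("timed out waiting for the default service account")
	}
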
	I1202 11:46:39.023890   23379 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.023968   23379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.024843   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.025096   23379 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.025129   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:46:39.025139   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:46:39.025146   23379 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 11:46:39.025247   23379 addons.go:69] Setting storage-provisioner=true in profile "ha-604935"
	I1202 11:46:39.025268   23379 addons.go:234] Setting addon storage-provisioner=true in "ha-604935"
	I1202 11:46:39.025297   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.025365   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.025267   23379 addons.go:69] Setting default-storageclass=true in profile "ha-604935"
	I1202 11:46:39.025420   23379 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-604935"
	I1202 11:46:39.025726   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025773   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.025867   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025904   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.040510   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I1202 11:46:39.040567   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1202 11:46:39.041007   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041111   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041642   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041669   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041855   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042005   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042156   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.042501   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.042547   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.044200   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.044508   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 11:46:39.044954   23379 cert_rotation.go:140] Starting client certificate rotation controller
	I1202 11:46:39.045176   23379 addons.go:234] Setting addon default-storageclass=true in "ha-604935"
	I1202 11:46:39.045212   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.045509   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.045548   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.056740   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I1202 11:46:39.057180   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.057736   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.057761   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.058043   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.058254   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.059103   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I1202 11:46:39.059506   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.059989   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.060003   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.060030   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.060305   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.060780   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.060821   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.061507   23379 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:46:39.062672   23379 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.062687   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:46:39.062700   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.065792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066230   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.066257   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066378   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.066549   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.066694   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.066850   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.076289   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I1202 11:46:39.076690   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.077099   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.077122   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.077418   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.077579   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.079081   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.079273   23379 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.079287   23379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:46:39.079300   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.082143   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082579   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.082597   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082752   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.082910   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.083074   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.083219   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.138927   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:46:39.202502   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.264780   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.722155   23379 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
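
	The sed pipeline a few lines up inserts a `hosts { 192.168.39.1 host.minikube.internal; fallthrough }` stanza ahead of the Corefile's forward directive, which is what the "host record injected into CoreDNS's ConfigMap" line confirms. A minimal Go sketch of that string surgery is shown below; the input Corefile here is a trimmed, hypothetical example, not the one from the cluster.

	// Sketch only: insert a hosts block before the `forward .` line of a Corefile.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		corefile := `.:53 {
	        errors
	        health
	        forward . /etc/resolv.conf
	        cache 30
	}`

		hostsBlock := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"

		var out strings.Builder
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
				out.WriteString(hostsBlock) // mirrors sed's insert-before (`/i`) behaviour
			}
			out.WriteString(line + "\n")
		}
		fmt.Print(out.String())
	}
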
	I1202 11:46:39.944980   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945000   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945116   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945141   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945269   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945284   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945292   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945298   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945459   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945489   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945500   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945513   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945457   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945578   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945581   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945620   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945796   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945844   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945933   23379 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 11:46:39.945977   23379 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 11:46:39.945813   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.946087   23379 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1202 11:46:39.946099   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.946109   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.946117   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.963939   23379 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1202 11:46:39.964651   23379 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1202 11:46:39.964667   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.964677   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.964684   23379 round_trippers.go:473]     Content-Type: application/json
	I1202 11:46:39.964689   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.968484   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:46:39.968627   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.968639   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.968886   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.968902   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.970238   23379 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1202 11:46:39.971383   23379 addons.go:510] duration metric: took 946.244666ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 11:46:39.971420   23379 start.go:246] waiting for cluster config update ...
	I1202 11:46:39.971435   23379 start.go:255] writing updated cluster config ...
	I1202 11:46:39.972900   23379 out.go:201] 
	I1202 11:46:39.974083   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.974147   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.975564   23379 out.go:177] * Starting "ha-604935-m02" control-plane node in "ha-604935" cluster
	I1202 11:46:39.976682   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:39.976701   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:46:39.976788   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:46:39.976800   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:46:39.976872   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.977100   23379 start.go:360] acquireMachinesLock for ha-604935-m02: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:46:39.977152   23379 start.go:364] duration metric: took 22.26µs to acquireMachinesLock for "ha-604935-m02"
	I1202 11:46:39.977175   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.977250   23379 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1202 11:46:39.978689   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:46:39.978765   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.978800   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.993356   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I1202 11:46:39.993775   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.994235   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.994266   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.994666   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.994881   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:46:39.995033   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:46:39.995225   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:46:39.995256   23379 client.go:168] LocalClient.Create starting
	I1202 11:46:39.995293   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:46:39.995339   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995364   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995433   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:46:39.995460   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995482   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995508   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:46:39.995520   23379 main.go:141] libmachine: (ha-604935-m02) Calling .PreCreateCheck
	I1202 11:46:39.995688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:46:39.996035   23379 main.go:141] libmachine: Creating machine...
	I1202 11:46:39.996049   23379 main.go:141] libmachine: (ha-604935-m02) Calling .Create
	I1202 11:46:39.996158   23379 main.go:141] libmachine: (ha-604935-m02) Creating KVM machine...
	I1202 11:46:39.997515   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing default KVM network
	I1202 11:46:39.997667   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing private KVM network mk-ha-604935
	I1202 11:46:39.997862   23379 main.go:141] libmachine: (ha-604935-m02) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:39.997894   23379 main.go:141] libmachine: (ha-604935-m02) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:46:39.997973   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:39.997863   23734 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:39.998066   23379 main.go:141] libmachine: (ha-604935-m02) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:46:40.246601   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.246459   23734 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa...
	I1202 11:46:40.345704   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345606   23734 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk...
	I1202 11:46:40.345732   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing magic tar header
	I1202 11:46:40.345746   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing SSH key tar header
	I1202 11:46:40.345760   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345732   23734 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:40.345873   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02
	I1202 11:46:40.345899   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:46:40.345912   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 (perms=drwx------)
	I1202 11:46:40.345936   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:40.345967   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:46:40.345981   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:46:40.345991   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:46:40.346001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:46:40.346014   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home
	I1202 11:46:40.346025   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Skipping /home - not owner
	I1202 11:46:40.346072   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:46:40.346108   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:46:40.346124   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:46:40.346137   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
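	The id_rsa created in common.go:151 above is an ordinary SSH keypair written next to the machine's disk image. A standalone Go sketch of that step (key size, file names and paths are assumptions for illustration, not minikube's actual helper):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate an RSA key for the new machine (2048 bits assumed here).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// Private key as PEM, written 0600 like the id_rsa logged above.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			log.Fatal(err)
		}
		// Public half in authorized_keys format, to be baked into the guest.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			log.Fatal(err)
		}
	}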
	I1202 11:46:40.346162   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:40.346895   23379 main.go:141] libmachine: (ha-604935-m02) define libvirt domain using xml: 
	I1202 11:46:40.346916   23379 main.go:141] libmachine: (ha-604935-m02) <domain type='kvm'>
	I1202 11:46:40.346942   23379 main.go:141] libmachine: (ha-604935-m02)   <name>ha-604935-m02</name>
	I1202 11:46:40.346957   23379 main.go:141] libmachine: (ha-604935-m02)   <memory unit='MiB'>2200</memory>
	I1202 11:46:40.346974   23379 main.go:141] libmachine: (ha-604935-m02)   <vcpu>2</vcpu>
	I1202 11:46:40.346979   23379 main.go:141] libmachine: (ha-604935-m02)   <features>
	I1202 11:46:40.346986   23379 main.go:141] libmachine: (ha-604935-m02)     <acpi/>
	I1202 11:46:40.346990   23379 main.go:141] libmachine: (ha-604935-m02)     <apic/>
	I1202 11:46:40.346995   23379 main.go:141] libmachine: (ha-604935-m02)     <pae/>
	I1202 11:46:40.347001   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347008   23379 main.go:141] libmachine: (ha-604935-m02)   </features>
	I1202 11:46:40.347027   23379 main.go:141] libmachine: (ha-604935-m02)   <cpu mode='host-passthrough'>
	I1202 11:46:40.347034   23379 main.go:141] libmachine: (ha-604935-m02)   
	I1202 11:46:40.347038   23379 main.go:141] libmachine: (ha-604935-m02)   </cpu>
	I1202 11:46:40.347043   23379 main.go:141] libmachine: (ha-604935-m02)   <os>
	I1202 11:46:40.347049   23379 main.go:141] libmachine: (ha-604935-m02)     <type>hvm</type>
	I1202 11:46:40.347054   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='cdrom'/>
	I1202 11:46:40.347060   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='hd'/>
	I1202 11:46:40.347066   23379 main.go:141] libmachine: (ha-604935-m02)     <bootmenu enable='no'/>
	I1202 11:46:40.347072   23379 main.go:141] libmachine: (ha-604935-m02)   </os>
	I1202 11:46:40.347077   23379 main.go:141] libmachine: (ha-604935-m02)   <devices>
	I1202 11:46:40.347082   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='cdrom'>
	I1202 11:46:40.347089   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/boot2docker.iso'/>
	I1202 11:46:40.347096   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hdc' bus='scsi'/>
	I1202 11:46:40.347101   23379 main.go:141] libmachine: (ha-604935-m02)       <readonly/>
	I1202 11:46:40.347105   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347111   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='disk'>
	I1202 11:46:40.347118   23379 main.go:141] libmachine: (ha-604935-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:46:40.347128   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk'/>
	I1202 11:46:40.347135   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hda' bus='virtio'/>
	I1202 11:46:40.347140   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347144   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347152   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='mk-ha-604935'/>
	I1202 11:46:40.347156   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347162   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347167   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347172   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='default'/>
	I1202 11:46:40.347178   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347183   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347187   23379 main.go:141] libmachine: (ha-604935-m02)     <serial type='pty'>
	I1202 11:46:40.347194   23379 main.go:141] libmachine: (ha-604935-m02)       <target port='0'/>
	I1202 11:46:40.347204   23379 main.go:141] libmachine: (ha-604935-m02)     </serial>
	I1202 11:46:40.347211   23379 main.go:141] libmachine: (ha-604935-m02)     <console type='pty'>
	I1202 11:46:40.347221   23379 main.go:141] libmachine: (ha-604935-m02)       <target type='serial' port='0'/>
	I1202 11:46:40.347236   23379 main.go:141] libmachine: (ha-604935-m02)     </console>
	I1202 11:46:40.347247   23379 main.go:141] libmachine: (ha-604935-m02)     <rng model='virtio'>
	I1202 11:46:40.347255   23379 main.go:141] libmachine: (ha-604935-m02)       <backend model='random'>/dev/random</backend>
	I1202 11:46:40.347264   23379 main.go:141] libmachine: (ha-604935-m02)     </rng>
	I1202 11:46:40.347271   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347282   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347295   23379 main.go:141] libmachine: (ha-604935-m02)   </devices>
	I1202 11:46:40.347306   23379 main.go:141] libmachine: (ha-604935-m02) </domain>
	I1202 11:46:40.347319   23379 main.go:141] libmachine: (ha-604935-m02) 
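	The <domain> XML logged above is handed to libvirt as a define-then-start pair. A minimal Go sketch of that call sequence using the libvirt Go bindings (the import path and reading the XML from a file are assumptions; this is not minikube's kvm driver code):

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // older projects use github.com/libvirt/libvirt-go
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// The <domain type='kvm'> document logged above, read from disk for brevity.
		xml, err := os.ReadFile("ha-604935-m02.xml")
		if err != nil {
			log.Fatal(err)
		}
		// Define the persistent domain, then start it ("Creating domain...").
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}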
	I1202 11:46:40.353726   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:2b:bd:2e in network default
	I1202 11:46:40.354276   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring networks are active...
	I1202 11:46:40.354296   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:40.355011   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network default is active
	I1202 11:46:40.355333   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network mk-ha-604935 is active
	I1202 11:46:40.355771   23379 main.go:141] libmachine: (ha-604935-m02) Getting domain xml...
	I1202 11:46:40.356531   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:41.552192   23379 main.go:141] libmachine: (ha-604935-m02) Waiting to get IP...
	I1202 11:46:41.552923   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.553342   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.553365   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.553311   23734 retry.go:31] will retry after 250.26239ms: waiting for machine to come up
	I1202 11:46:41.804774   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.805224   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.805252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.805182   23734 retry.go:31] will retry after 337.906383ms: waiting for machine to come up
	I1202 11:46:42.144697   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.145141   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.145174   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.145097   23734 retry.go:31] will retry after 345.416251ms: waiting for machine to come up
	I1202 11:46:42.491650   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.492205   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.492269   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.492187   23734 retry.go:31] will retry after 576.231118ms: waiting for machine to come up
	I1202 11:46:43.069832   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.070232   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.070258   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.070185   23734 retry.go:31] will retry after 484.637024ms: waiting for machine to come up
	I1202 11:46:43.557338   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.557918   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.557945   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.557876   23734 retry.go:31] will retry after 878.448741ms: waiting for machine to come up
	I1202 11:46:44.437501   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:44.437938   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:44.437963   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:44.437910   23734 retry.go:31] will retry after 1.136235758s: waiting for machine to come up
	I1202 11:46:45.575985   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:45.576450   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:45.576493   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:45.576415   23734 retry.go:31] will retry after 1.136366132s: waiting for machine to come up
	I1202 11:46:46.714826   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:46.715252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:46.715280   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:46.715201   23734 retry.go:31] will retry after 1.737559308s: waiting for machine to come up
	I1202 11:46:48.455006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:48.455487   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:48.455517   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:48.455436   23734 retry.go:31] will retry after 1.586005802s: waiting for machine to come up
	I1202 11:46:50.042947   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:50.043522   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:50.043548   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:50.043471   23734 retry.go:31] will retry after 1.94342421s: waiting for machine to come up
	I1202 11:46:51.988099   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:51.988615   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:51.988639   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:51.988575   23734 retry.go:31] will retry after 3.527601684s: waiting for machine to come up
	I1202 11:46:55.517564   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:55.518092   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:55.518121   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:55.518041   23734 retry.go:31] will retry after 3.578241105s: waiting for machine to come up
	I1202 11:46:59.097310   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:59.097631   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:59.097651   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:59.097596   23734 retry.go:31] will retry after 5.085934719s: waiting for machine to come up
	I1202 11:47:04.187907   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.188401   23379 main.go:141] libmachine: (ha-604935-m02) Found IP for machine: 192.168.39.96
	I1202 11:47:04.188429   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has current primary IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.188437   23379 main.go:141] libmachine: (ha-604935-m02) Reserving static IP address...
	I1202 11:47:04.188743   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "ha-604935-m02", mac: "52:54:00:42:3a:28", ip: "192.168.39.96"} in network mk-ha-604935
	I1202 11:47:04.256531   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:04.256562   23379 main.go:141] libmachine: (ha-604935-m02) Reserved static IP address: 192.168.39.96
	I1202 11:47:04.256575   23379 main.go:141] libmachine: (ha-604935-m02) Waiting for SSH to be available...
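	The retry.go:31 lines above are a poll loop: look for a DHCP lease matching the domain's MAC, and if none exists yet, sleep a growing, jittered interval and try again until a deadline. A generic sketch of that pattern (function and variable names are illustrative, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup() until it returns an address or the deadline passes,
	// sleeping a growing, jittered interval between attempts.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease for MAC yet")
			}
			return "192.168.39.96", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}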
	I1202 11:47:04.258823   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.259113   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935
	I1202 11:47:04.259157   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:42:3a:28
	I1202 11:47:04.259288   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:04.259308   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:04.259373   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:04.259397   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:04.259411   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:04.263986   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:47:04.264009   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:47:04.264016   23379 main.go:141] libmachine: (ha-604935-m02) DBG | command : exit 0
	I1202 11:47:04.264041   23379 main.go:141] libmachine: (ha-604935-m02) DBG | err     : exit status 255
	I1202 11:47:04.264051   23379 main.go:141] libmachine: (ha-604935-m02) DBG | output  : 
	I1202 11:47:07.264654   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:07.266849   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267221   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.267249   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267406   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:07.267434   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:07.267472   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:07.267495   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:07.267507   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:07.391931   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: <nil>: 
	I1202 11:47:07.392120   23379 main.go:141] libmachine: (ha-604935-m02) KVM machine creation complete!
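	The WaitForSSH step above simply runs `exit 0` over SSH until it returns cleanly; the first attempt fails (exit status 255) because the guest's sshd is not up yet. A minimal sketch of that reachability probe with golang.org/x/crypto/ssh, with host-key checking relaxed the same way the logged ssh options do (addresses and paths are examples):

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshReady(addr, user, keyPath string) error {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // nil error means the machine is reachable
	}

	func main() {
		if err := sshReady("192.168.39.96:22", "docker", "id_rsa"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("SSH is available")
	}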
	I1202 11:47:07.392498   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:07.393039   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393215   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393337   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:47:07.393354   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetState
	I1202 11:47:07.394565   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:47:07.394578   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:47:07.394584   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:47:07.394589   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.396709   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.397033   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397522   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.398890   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399081   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399216   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.399356   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.399544   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.399555   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:47:07.503380   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.503409   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:47:07.503420   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.506083   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506469   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.506502   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506641   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.506811   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.506958   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.507087   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.507236   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.507398   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.507407   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:47:07.612741   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:47:07.612843   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:47:07.612858   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:47:07.612872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613105   23379 buildroot.go:166] provisioning hostname "ha-604935-m02"
	I1202 11:47:07.613126   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613280   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.615682   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.616029   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616193   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.616355   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616496   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616615   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.616752   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.616925   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.616942   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m02 && echo "ha-604935-m02" | sudo tee /etc/hostname
	I1202 11:47:07.739596   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m02
	
	I1202 11:47:07.739622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.742125   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742500   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.742532   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742709   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.742872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743043   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743173   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.743334   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.743539   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.743561   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:47:07.857236   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.857259   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:47:07.857284   23379 buildroot.go:174] setting up certificates
	I1202 11:47:07.857292   23379 provision.go:84] configureAuth start
	I1202 11:47:07.857300   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.857527   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:07.860095   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860513   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.860543   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860692   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.862585   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.862958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.862988   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.863114   23379 provision.go:143] copyHostCerts
	I1202 11:47:07.863150   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863186   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:47:07.863197   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863272   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:47:07.863374   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863401   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:47:07.863412   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863452   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:47:07.863528   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863553   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:47:07.863563   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863595   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:47:07.863674   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m02 san=[127.0.0.1 192.168.39.96 ha-604935-m02 localhost minikube]
	I1202 11:47:08.103724   23379 provision.go:177] copyRemoteCerts
	I1202 11:47:08.103779   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:47:08.103802   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.106490   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.106829   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.106859   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.107025   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.107200   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.107328   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.107425   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.190303   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:47:08.190378   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:47:08.217749   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:47:08.217812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:47:08.240576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:47:08.240626   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:47:08.263351   23379 provision.go:87] duration metric: took 406.049409ms to configureAuth
	I1202 11:47:08.263374   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:47:08.263549   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:08.263627   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.266183   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266506   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.266542   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266657   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.266822   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.266953   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.267045   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.267212   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.267440   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.267458   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:47:08.480702   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:47:08.480726   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:47:08.480737   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetURL
	I1202 11:47:08.481946   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using libvirt version 6000000
	I1202 11:47:08.484074   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484465   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.484486   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484652   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:47:08.484665   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:47:08.484672   23379 client.go:171] duration metric: took 28.489409707s to LocalClient.Create
	I1202 11:47:08.484691   23379 start.go:167] duration metric: took 28.489467042s to libmachine.API.Create "ha-604935"
	I1202 11:47:08.484701   23379 start.go:293] postStartSetup for "ha-604935-m02" (driver="kvm2")
	I1202 11:47:08.484710   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:47:08.484726   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.484947   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:47:08.484979   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.487275   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487627   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.487652   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487763   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.487916   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.488023   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.488157   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.570418   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:47:08.574644   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:47:08.574668   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:47:08.574734   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:47:08.574834   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:47:08.574847   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:47:08.574955   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:47:08.584296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:08.607137   23379 start.go:296] duration metric: took 122.426316ms for postStartSetup
	I1202 11:47:08.607176   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:08.607688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.609787   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610122   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.610140   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610348   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:08.610507   23379 start.go:128] duration metric: took 28.633177558s to createHost
	I1202 11:47:08.610528   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.612576   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.612933   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.612958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.613094   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.613256   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613387   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613495   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.613675   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.613819   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.613829   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:47:08.721072   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140028.701362667
	
	I1202 11:47:08.721095   23379 fix.go:216] guest clock: 1733140028.701362667
	I1202 11:47:08.721104   23379 fix.go:229] Guest: 2024-12-02 11:47:08.701362667 +0000 UTC Remote: 2024-12-02 11:47:08.610518479 +0000 UTC m=+77.169276420 (delta=90.844188ms)
	I1202 11:47:08.721123   23379 fix.go:200] guest clock delta is within tolerance: 90.844188ms
	I1202 11:47:08.721129   23379 start.go:83] releasing machines lock for "ha-604935-m02", held for 28.743964366s
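	The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the result if the skew is small. A sketch of that arithmetic using the values from this run (the 2s tolerance is an assumption; minikube's actual threshold may differ):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output (9-digit nanoseconds assumed) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1733140028.701362667") // guest clock from the log above
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 12, 2, 11, 47, 8, 610518479, time.UTC) // host timestamp from the log
		delta := guest.Sub(host) // 90.844188ms for this run
		const tolerance = 2 * time.Second
		fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) <= float64(tolerance))
	}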
	I1202 11:47:08.721146   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.721362   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.723610   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.723892   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.723917   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.725920   23379 out.go:177] * Found network options:
	I1202 11:47:08.727151   23379 out.go:177]   - NO_PROXY=192.168.39.102
	W1202 11:47:08.728253   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.728295   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728718   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728888   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728964   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:47:08.729018   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	W1202 11:47:08.729077   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.729140   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:47:08.729159   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.731377   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731690   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731736   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.731757   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731905   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732089   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732138   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.732161   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.732263   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732335   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732412   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.732482   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732772   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.961089   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:47:08.967388   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:47:08.967456   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:47:08.983898   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:47:08.983919   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:47:08.983976   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:47:08.999755   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:47:09.012969   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:47:09.013013   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:47:09.025774   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:47:09.038595   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:47:09.155525   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:47:09.315590   23379 docker.go:233] disabling docker service ...
	I1202 11:47:09.315645   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:47:09.329428   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:47:09.341852   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:47:09.455987   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:47:09.568119   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:47:09.581349   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:47:09.599069   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:47:09.599131   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.609102   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:47:09.609172   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.619619   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.629809   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.640881   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:47:09.650894   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.660662   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.676866   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
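	Reading the sed commands above together, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows (fragment reconstructed from the commands, not captured from the machine; the section headers are the stock CRI-O ones and are an assumption here):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]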
	I1202 11:47:09.687794   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:47:09.696987   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:47:09.697035   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:47:09.709512   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:47:09.718617   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:09.833443   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:47:09.924039   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:47:09.924108   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:47:09.929102   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:47:09.929151   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:47:09.932909   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:47:09.970799   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:47:09.970857   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:09.997925   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:10.026009   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:47:10.027185   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:47:10.028209   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:10.030558   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.030843   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:10.030865   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.031081   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:47:10.034913   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:10.046993   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:47:10.047168   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:10.047464   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.047509   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.061535   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I1202 11:47:10.061962   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.062500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.062519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.062832   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.062993   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:47:10.064396   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:10.064646   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.064674   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.078237   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1202 11:47:10.078536   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.078918   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.078933   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.079205   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.079368   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:10.079517   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.96
	I1202 11:47:10.079528   23379 certs.go:194] generating shared ca certs ...
	I1202 11:47:10.079548   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.079686   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:47:10.079733   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:47:10.079746   23379 certs.go:256] generating profile certs ...
	I1202 11:47:10.079838   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:47:10.079869   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3
	I1202 11:47:10.079889   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.254]
	I1202 11:47:10.265166   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 ...
	I1202 11:47:10.265189   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3: {Name:mkdd0b8b1421fc39bdc7a4c81c195bce0584f3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265365   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 ...
	I1202 11:47:10.265383   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3: {Name:mk317f3cb02e9fefc92b2802c6865b7da9a08a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265473   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:47:10.265636   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
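
The apiserver profile certificate is regenerated here because its SAN list now has to cover the new control-plane node (192.168.39.96) and the HA VIP (192.168.39.254) alongside the first node and the in-cluster addresses. A minimal crypto/x509 sketch producing a certificate with the same IP SANs; it self-signs for brevity, whereas the real certificate is signed with the minikubeCA key shown above, and the CommonName is illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs copied from the "Generating cert ... with IP's" log line above.
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.102", "192.168.39.96", "192.168.39.254"} {
		ips = append(ips, net.ParseIP(s))
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed for illustration only; minikube signs this cert with its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
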
	I1202 11:47:10.265813   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:47:10.265832   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:47:10.265850   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:47:10.265871   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:47:10.265888   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:47:10.265904   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:47:10.265920   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:47:10.265936   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:47:10.265955   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:47:10.266021   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:47:10.266059   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:47:10.266073   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:47:10.266106   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:47:10.266137   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:47:10.266166   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:47:10.266222   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:10.266260   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.266282   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.266301   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.266341   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:10.268885   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269241   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:10.269271   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269395   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:10.269566   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:10.269669   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:10.269777   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:10.344538   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:47:10.349538   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:47:10.360402   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:47:10.364479   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:47:10.374445   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:47:10.378811   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:47:10.389170   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:47:10.392986   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:47:10.403485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:47:10.408617   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:47:10.418394   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:47:10.422245   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:47:10.432316   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:47:10.458960   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:47:10.483156   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:47:10.505724   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:47:10.527955   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 11:47:10.550812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:47:10.573508   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:47:10.595760   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:47:10.618337   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:47:10.641184   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:47:10.663681   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:47:10.687678   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:47:10.703651   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:47:10.719297   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:47:10.734755   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:47:10.751060   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:47:10.767295   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:47:10.783201   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:47:10.798776   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:47:10.804781   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:47:10.814853   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819107   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819150   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.824680   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:47:10.834444   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:47:10.847333   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852096   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852141   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.857456   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:47:10.867671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:47:10.878797   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883014   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883050   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.888463   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
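
Each CA copied to /usr/share/ca-certificates is then exposed to OpenSSL-based clients by linking it into /etc/ssl/certs under its subject hash (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of that hash-and-link step, shelling out to openssl the same way the commands above do; the certificate path is the one from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const certPath = "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash that
	// names the /etc/ssl/certs/<hash>.0 symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")

	// Roughly `test -L <link> || ln -fs <cert> <link>` from the log.
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already present:", link)
		return
	}
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
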
	I1202 11:47:10.900014   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:47:10.903987   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:47:10.904033   23379 kubeadm.go:934] updating node {m02 192.168.39.96 8443 v1.31.2 crio true true} ...
	I1202 11:47:10.904108   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:47:10.904143   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:47:10.904172   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:47:10.920663   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:47:10.920727   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
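
This static pod manifest is what later gets copied to /etc/kubernetes/manifests/kube-vip.yaml; it advertises 192.168.39.254:8443 as the control-plane VIP with leader election and load-balancing enabled. When the HA tests in this report stall, a quick way to tell whether the VIP is actually being served is a plain TCP/TLS probe; a minimal sketch, with the address and port taken from the config above:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port from the kube-vip config above.
	const vip = "192.168.39.254:8443"

	// First check that something answers on the VIP at all.
	conn, err := net.DialTimeout("tcp", vip, 5*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()

	// Then confirm a TLS endpoint (the apiserver) is behind it. The apiserver
	// cert is not in the system trust store, so skip verification for the probe.
	tconn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second}, "tcp", vip,
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("TLS handshake failed:", err)
		return
	}
	defer tconn.Close()
	fmt.Println("VIP serving TLS, peer cert CN:", tconn.ConnectionState().PeerCertificates[0].Subject.CommonName)
}
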
	I1202 11:47:10.920782   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.929813   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:47:10.929869   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.938939   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:47:10.938963   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939004   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1202 11:47:10.939023   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939098   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1202 11:47:10.943516   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:47:10.943543   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:47:11.580278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.580378   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.585380   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:47:11.585410   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:47:11.699996   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:11.746001   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.746098   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.755160   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:47:11.755193   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
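
Each missing binary is downloaded from dl.k8s.io with a checksum reference of the form file:<url>.sha256 and then copied into /var/lib/minikube/binaries/v1.31.2/ on the node. A minimal sketch of that download-and-verify step for kubectl; the URL pattern is the one in the log, the local output path is illustrative only.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256") // the .sha256 file contains just the hex digest
	if err != nil {
		panic(err)
	}

	h := sha256.Sum256(bin)
	if got, want := hex.EncodeToString(h[:]), strings.TrimSpace(string(sum)); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}

	// Illustrative destination; the run above stages the verified binary under
	// /var/lib/minikube/binaries/v1.31.2/ on the target node via scp.
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified,", len(bin), "bytes")
}
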
	I1202 11:47:12.167193   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:47:12.177362   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 11:47:12.193477   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:47:12.209277   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:47:12.225224   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:47:12.229096   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:12.241465   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:12.355965   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:12.372721   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:12.373199   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:12.373246   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:12.387521   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1202 11:47:12.387950   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:12.388471   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:12.388495   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:12.388817   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:12.389008   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:12.389136   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:47:12.389250   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:47:12.389272   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:12.391559   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.391918   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:12.391947   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.392078   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:12.392244   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:12.392404   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:12.392523   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:12.542455   23379 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:12.542510   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443"
	I1202 11:47:33.298276   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443": (20.75572497s)
	I1202 11:47:33.298324   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:47:33.868140   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m02 minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:47:34.014505   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:47:34.151913   23379 start.go:319] duration metric: took 21.762775302s to joinCluster
	I1202 11:47:34.151988   23379 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:34.152289   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:34.153405   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:47:34.154583   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:34.458218   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:34.537753   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:47:34.537985   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:47:34.538049   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:47:34.538237   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m02" to be "Ready" ...
	I1202 11:47:34.538328   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:34.538338   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:34.538353   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:34.538361   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:34.553164   23379 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1202 11:47:35.038636   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.038655   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.038663   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.038667   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.043410   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:35.539240   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.539268   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.539288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.539295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.543768   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:36.038477   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.038500   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.038510   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.038514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.044852   23379 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1202 11:47:36.539264   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.539282   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.539291   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.539294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.541884   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:36.542608   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:37.039323   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.039344   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.039355   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.039363   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.042762   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:37.539267   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.539288   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.539298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.539302   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.542085   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:38.039187   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.039205   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.039213   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.039217   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.042510   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:38.538564   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.538590   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.538602   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.538607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.543229   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:38.543842   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:39.039431   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.039454   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.039465   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.039470   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.043101   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:39.538521   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.538565   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.544151   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:47:40.039125   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.039142   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.039150   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.039155   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.041928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:40.539447   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.539466   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.539477   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.539482   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.542088   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.039165   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.039194   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.039206   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.039214   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.042019   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.042646   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:41.538430   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.538449   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.538456   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.538460   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.541300   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:42.038543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.038564   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.038574   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.038579   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.042807   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:42.539123   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.539144   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.539155   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.539168   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.615775   23379 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1202 11:47:43.038628   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.038651   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.038660   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.038670   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.041582   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:43.538519   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.538566   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.542876   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:43.543448   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:44.038473   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.038493   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.038501   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.038506   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.041916   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:44.538909   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.538934   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.538946   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.538954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.542475   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.039019   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.039039   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.039046   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.039050   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.042662   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.539381   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.539404   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.539414   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.539419   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.543229   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.544177   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:46.038600   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.038622   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.038630   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.038635   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.041460   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:46.538597   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.538618   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.538628   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.538632   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.541444   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:47.038797   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.038817   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.038825   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.038828   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.041962   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:47.539440   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.539463   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.539470   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.539474   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.543115   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.039282   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.039306   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.039316   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.039320   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.042491   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.043162   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:48.539348   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.539372   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.539382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.539387   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.542583   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.038466   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.038485   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.038493   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.038498   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.041480   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.539130   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.539151   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.539162   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.539166   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.542870   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.543570   23379 node_ready.go:49] node "ha-604935-m02" has status "Ready":"True"
	I1202 11:47:49.543589   23379 node_ready.go:38] duration metric: took 15.005336835s for node "ha-604935-m02" to be "Ready" ...
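
The readiness loop above is a plain poll of GET /api/v1/nodes/ha-604935-m02 roughly every 500 ms until the Ready condition flips to True, which took about 15 s here out of a 6-minute budget. A minimal client-go sketch of the same wait, assuming a kubeconfig in the default location; the node name and timeout come from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const nodeName = "ha-604935-m02" // from the log above

	// Poll every 500ms for up to 6 minutes, matching the wait budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node", nodeName, "is Ready")
}
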
	I1202 11:47:49.543598   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:47:49.543686   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:49.543695   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.543702   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.543707   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.548022   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.557050   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.557145   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:47:49.557159   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.557169   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.557181   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.561541   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.562194   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.562212   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.562222   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.562229   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.564378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.564821   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.564836   23379 pod_ready.go:82] duration metric: took 7.7579ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564845   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:47:49.564905   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.564912   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.564919   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.566980   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.567489   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.567501   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.567509   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.567514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.569545   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.570321   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.570337   23379 pod_ready.go:82] duration metric: took 5.482367ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570346   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570395   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:47:49.570402   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.570408   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.570416   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.572224   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.572830   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.572845   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.572852   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.572856   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.574847   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.575387   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.575407   23379 pod_ready.go:82] duration metric: took 5.05521ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575417   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575471   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:49.575482   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.575492   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.575497   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.577559   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.578025   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.578036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.578042   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.578046   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.580244   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.075930   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.075955   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.075967   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.075972   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.078932   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.079644   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.079660   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.079671   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.079679   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.083049   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.576373   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.576396   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.576404   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.576408   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.579581   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.580413   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.580428   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.580435   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.580439   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.582674   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.075671   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:51.075692   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.075700   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.075705   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.080547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:51.081109   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.081140   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.081151   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.081159   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.083775   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.084570   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.084587   23379 pod_ready.go:82] duration metric: took 1.509162413s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084605   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084654   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:47:51.084661   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.084668   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.084676   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.086997   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.139895   23379 request.go:632] Waited for 52.198749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139936   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139941   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.139948   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.139954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.142459   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.143143   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.143164   23379 pod_ready.go:82] duration metric: took 58.549955ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.143176   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.339592   23379 request.go:632] Waited for 196.342057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339648   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.339657   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.339665   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.342939   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.539862   23379 request.go:632] Waited for 196.164588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539935   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.539943   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.539950   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.543209   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.543865   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.543882   23379 pod_ready.go:82] duration metric: took 400.698772ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.543892   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.739144   23379 request.go:632] Waited for 195.19473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739219   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739235   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.739245   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.739249   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.741900   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.940184   23379 request.go:632] Waited for 197.361013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940269   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940278   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.940285   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.940289   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.943128   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.943706   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.943727   23379 pod_ready.go:82] duration metric: took 399.828238ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.943741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.139832   23379 request.go:632] Waited for 196.024828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139908   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.139915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.139922   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.143273   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.339296   23379 request.go:632] Waited for 195.254025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339382   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.339392   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.339396   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.343086   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.343632   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.343651   23379 pod_ready.go:82] duration metric: took 399.901549ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.343664   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.540119   23379 request.go:632] Waited for 196.382954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540223   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.540246   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.540254   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.544789   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.739964   23379 request.go:632] Waited for 194.383281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740029   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.740047   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.740056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.744675   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.745274   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.745291   23379 pod_ready.go:82] duration metric: took 401.620034ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.745302   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.939398   23379 request.go:632] Waited for 194.014981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939448   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939453   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.939460   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.939466   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.942473   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:53.139562   23379 request.go:632] Waited for 196.368019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139626   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139631   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.139639   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.139642   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.142786   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.143361   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.143382   23379 pod_ready.go:82] duration metric: took 398.068666ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.143391   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.339501   23379 request.go:632] Waited for 196.04496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339586   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339596   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.339607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.339618   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.343080   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.540159   23379 request.go:632] Waited for 196.184742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540226   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540246   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.540255   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.540261   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.543534   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.544454   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.544479   23379 pod_ready.go:82] duration metric: took 401.077052ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.544494   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.739453   23379 request.go:632] Waited for 194.878612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739540   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739557   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.739572   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.739583   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.743318   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.939180   23379 request.go:632] Waited for 195.280753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939245   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939250   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.939258   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.939265   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.943381   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:53.944067   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.944085   23379 pod_ready.go:82] duration metric: took 399.577551ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.944099   23379 pod_ready.go:39] duration metric: took 4.40047197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:47:53.944119   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:47:53.944173   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:47:53.960762   23379 api_server.go:72] duration metric: took 19.808744771s to wait for apiserver process to appear ...
	I1202 11:47:53.960781   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:47:53.960802   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:47:53.965634   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:47:53.965695   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:47:53.965706   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.965717   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.965727   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.966539   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:47:53.966644   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:47:53.966664   23379 api_server.go:131] duration metric: took 5.87665ms to wait for apiserver health ...
	I1202 11:47:53.966674   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:47:54.140116   23379 request.go:632] Waited for 173.370822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140184   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140192   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.140203   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.140213   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.144688   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.150151   23379 system_pods.go:59] 17 kube-system pods found
	I1202 11:47:54.150175   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.150180   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.150184   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.150187   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.150190   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.150193   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.150196   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.150200   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.150204   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.150208   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.150213   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.150216   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.150222   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.150225   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.150228   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.150230   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.150234   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.150239   23379 system_pods.go:74] duration metric: took 183.556674ms to wait for pod list to return data ...
	I1202 11:47:54.150248   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:47:54.339686   23379 request.go:632] Waited for 189.36849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339740   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339744   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.339751   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.339755   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.343135   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.343361   23379 default_sa.go:45] found service account: "default"
	I1202 11:47:54.343386   23379 default_sa.go:55] duration metric: took 193.131705ms for default service account to be created ...
	I1202 11:47:54.343397   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:47:54.539835   23379 request.go:632] Waited for 196.371965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539943   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.539954   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.539964   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.544943   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.550739   23379 system_pods.go:86] 17 kube-system pods found
	I1202 11:47:54.550763   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.550769   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.550775   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.550778   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.550809   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.550819   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.550824   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.550829   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.550833   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.550837   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.550841   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.550848   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.550852   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.550857   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.550862   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.550867   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.550870   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.550878   23379 system_pods.go:126] duration metric: took 207.476252ms to wait for k8s-apps to be running ...
	I1202 11:47:54.550887   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:47:54.550927   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:54.567143   23379 system_svc.go:56] duration metric: took 16.250371ms WaitForService to wait for kubelet
	I1202 11:47:54.567163   23379 kubeadm.go:582] duration metric: took 20.415147049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:47:54.567180   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:47:54.739589   23379 request.go:632] Waited for 172.338353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739668   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739675   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.739683   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.739688   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.743346   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.744125   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744152   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744165   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744170   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744177   23379 node_conditions.go:105] duration metric: took 176.990456ms to run NodePressure ...
	I1202 11:47:54.744190   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:47:54.744223   23379 start.go:255] writing updated cluster config ...
	I1202 11:47:54.746253   23379 out.go:201] 
	I1202 11:47:54.747593   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:54.747718   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.749358   23379 out.go:177] * Starting "ha-604935-m03" control-plane node in "ha-604935" cluster
	I1202 11:47:54.750410   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:47:54.750433   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:47:54.750533   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:47:54.750548   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:47:54.750643   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.750878   23379 start.go:360] acquireMachinesLock for ha-604935-m03: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:47:54.750923   23379 start.go:364] duration metric: took 26.206µs to acquireMachinesLock for "ha-604935-m03"
	I1202 11:47:54.750944   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:54.751067   23379 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1202 11:47:54.752864   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:47:54.752946   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:54.752986   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:54.767584   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1202 11:47:54.767916   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:54.768481   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:54.768505   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:54.768819   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:54.768991   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:47:54.769125   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:47:54.769335   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:47:54.769376   23379 client.go:168] LocalClient.Create starting
	I1202 11:47:54.769409   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:47:54.769445   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769469   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769535   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:47:54.769563   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769581   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769610   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:47:54.769622   23379 main.go:141] libmachine: (ha-604935-m03) Calling .PreCreateCheck
	I1202 11:47:54.769820   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:47:54.770184   23379 main.go:141] libmachine: Creating machine...
	I1202 11:47:54.770198   23379 main.go:141] libmachine: (ha-604935-m03) Calling .Create
	I1202 11:47:54.770317   23379 main.go:141] libmachine: (ha-604935-m03) Creating KVM machine...
	I1202 11:47:54.771476   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing default KVM network
	I1202 11:47:54.771588   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing private KVM network mk-ha-604935
	I1202 11:47:54.771715   23379 main.go:141] libmachine: (ha-604935-m03) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:54.771731   23379 main.go:141] libmachine: (ha-604935-m03) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:47:54.771824   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:54.771717   24139 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:54.771925   23379 main.go:141] libmachine: (ha-604935-m03) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:47:55.025734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.025618   24139 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa...
	I1202 11:47:55.125359   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125265   24139 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk...
	I1202 11:47:55.125386   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing magic tar header
	I1202 11:47:55.125397   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing SSH key tar header
	I1202 11:47:55.125407   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125384   24139 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:55.125541   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03
	I1202 11:47:55.125572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:47:55.125586   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 (perms=drwx------)
	I1202 11:47:55.125605   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:47:55.125622   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:47:55.125634   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:47:55.125649   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:47:55.125663   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:55.125683   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:47:55.125697   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:47:55.125710   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:47:55.125719   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home
	I1202 11:47:55.125733   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:47:55.125745   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Skipping /home - not owner
	I1202 11:47:55.125754   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:55.126629   23379 main.go:141] libmachine: (ha-604935-m03) define libvirt domain using xml: 
	I1202 11:47:55.126649   23379 main.go:141] libmachine: (ha-604935-m03) <domain type='kvm'>
	I1202 11:47:55.126659   23379 main.go:141] libmachine: (ha-604935-m03)   <name>ha-604935-m03</name>
	I1202 11:47:55.126667   23379 main.go:141] libmachine: (ha-604935-m03)   <memory unit='MiB'>2200</memory>
	I1202 11:47:55.126675   23379 main.go:141] libmachine: (ha-604935-m03)   <vcpu>2</vcpu>
	I1202 11:47:55.126685   23379 main.go:141] libmachine: (ha-604935-m03)   <features>
	I1202 11:47:55.126693   23379 main.go:141] libmachine: (ha-604935-m03)     <acpi/>
	I1202 11:47:55.126701   23379 main.go:141] libmachine: (ha-604935-m03)     <apic/>
	I1202 11:47:55.126706   23379 main.go:141] libmachine: (ha-604935-m03)     <pae/>
	I1202 11:47:55.126709   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.126714   23379 main.go:141] libmachine: (ha-604935-m03)   </features>
	I1202 11:47:55.126721   23379 main.go:141] libmachine: (ha-604935-m03)   <cpu mode='host-passthrough'>
	I1202 11:47:55.126745   23379 main.go:141] libmachine: (ha-604935-m03)   
	I1202 11:47:55.126763   23379 main.go:141] libmachine: (ha-604935-m03)   </cpu>
	I1202 11:47:55.126773   23379 main.go:141] libmachine: (ha-604935-m03)   <os>
	I1202 11:47:55.126780   23379 main.go:141] libmachine: (ha-604935-m03)     <type>hvm</type>
	I1202 11:47:55.126791   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='cdrom'/>
	I1202 11:47:55.126796   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='hd'/>
	I1202 11:47:55.126808   23379 main.go:141] libmachine: (ha-604935-m03)     <bootmenu enable='no'/>
	I1202 11:47:55.126817   23379 main.go:141] libmachine: (ha-604935-m03)   </os>
	I1202 11:47:55.126827   23379 main.go:141] libmachine: (ha-604935-m03)   <devices>
	I1202 11:47:55.126837   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='cdrom'>
	I1202 11:47:55.126849   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/boot2docker.iso'/>
	I1202 11:47:55.126860   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hdc' bus='scsi'/>
	I1202 11:47:55.126869   23379 main.go:141] libmachine: (ha-604935-m03)       <readonly/>
	I1202 11:47:55.126878   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126888   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='disk'>
	I1202 11:47:55.126904   23379 main.go:141] libmachine: (ha-604935-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:47:55.126929   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk'/>
	I1202 11:47:55.126949   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hda' bus='virtio'/>
	I1202 11:47:55.126958   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126972   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.126984   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='mk-ha-604935'/>
	I1202 11:47:55.126990   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127001   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127011   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.127022   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='default'/>
	I1202 11:47:55.127039   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127046   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127054   23379 main.go:141] libmachine: (ha-604935-m03)     <serial type='pty'>
	I1202 11:47:55.127059   23379 main.go:141] libmachine: (ha-604935-m03)       <target port='0'/>
	I1202 11:47:55.127065   23379 main.go:141] libmachine: (ha-604935-m03)     </serial>
	I1202 11:47:55.127070   23379 main.go:141] libmachine: (ha-604935-m03)     <console type='pty'>
	I1202 11:47:55.127080   23379 main.go:141] libmachine: (ha-604935-m03)       <target type='serial' port='0'/>
	I1202 11:47:55.127089   23379 main.go:141] libmachine: (ha-604935-m03)     </console>
	I1202 11:47:55.127100   23379 main.go:141] libmachine: (ha-604935-m03)     <rng model='virtio'>
	I1202 11:47:55.127112   23379 main.go:141] libmachine: (ha-604935-m03)       <backend model='random'>/dev/random</backend>
	I1202 11:47:55.127125   23379 main.go:141] libmachine: (ha-604935-m03)     </rng>
	I1202 11:47:55.127130   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127136   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127141   23379 main.go:141] libmachine: (ha-604935-m03)   </devices>
	I1202 11:47:55.127147   23379 main.go:141] libmachine: (ha-604935-m03) </domain>
	I1202 11:47:55.127154   23379 main.go:141] libmachine: (ha-604935-m03) 
	I1202 11:47:55.134362   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:04:31:c3 in network default
	I1202 11:47:55.134940   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring networks are active...
	I1202 11:47:55.134970   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:55.135700   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network default is active
	I1202 11:47:55.135994   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network mk-ha-604935 is active
	I1202 11:47:55.136395   23379 main.go:141] libmachine: (ha-604935-m03) Getting domain xml...
	I1202 11:47:55.137154   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:56.327343   23379 main.go:141] libmachine: (ha-604935-m03) Waiting to get IP...
	I1202 11:47:56.328051   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.328532   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.328560   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.328490   24139 retry.go:31] will retry after 245.534512ms: waiting for machine to come up
	I1202 11:47:56.575853   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.576344   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.576361   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.576322   24139 retry.go:31] will retry after 318.961959ms: waiting for machine to come up
	I1202 11:47:56.897058   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.897590   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.897617   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.897539   24139 retry.go:31] will retry after 408.54179ms: waiting for machine to come up
	I1202 11:47:57.308040   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.308434   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.308462   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.308386   24139 retry.go:31] will retry after 402.803745ms: waiting for machine to come up
	I1202 11:47:57.713046   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.713543   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.713570   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.713486   24139 retry.go:31] will retry after 579.226055ms: waiting for machine to come up
	I1202 11:47:58.294078   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:58.294470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:58.294499   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:58.294431   24139 retry.go:31] will retry after 896.930274ms: waiting for machine to come up
	I1202 11:47:59.192283   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:59.192647   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:59.192676   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:59.192594   24139 retry.go:31] will retry after 885.008169ms: waiting for machine to come up
	I1202 11:48:00.078944   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:00.079402   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:00.079429   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:00.079369   24139 retry.go:31] will retry after 1.252859053s: waiting for machine to come up
	I1202 11:48:01.333237   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:01.333651   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:01.333686   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:01.333595   24139 retry.go:31] will retry after 1.614324315s: waiting for machine to come up
	I1202 11:48:02.949128   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:02.949536   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:02.949565   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:02.949508   24139 retry.go:31] will retry after 1.812710836s: waiting for machine to come up
	I1202 11:48:04.763946   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:04.764375   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:04.764406   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:04.764323   24139 retry.go:31] will retry after 2.067204627s: waiting for machine to come up
	I1202 11:48:06.833288   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:06.833665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:06.833688   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:06.833637   24139 retry.go:31] will retry after 2.307525128s: waiting for machine to come up
	I1202 11:48:09.144169   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:09.144572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:09.144593   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:09.144528   24139 retry.go:31] will retry after 3.498536479s: waiting for machine to come up
	I1202 11:48:12.646257   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:12.646634   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:12.646662   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:12.646585   24139 retry.go:31] will retry after 4.180840958s: waiting for machine to come up
	I1202 11:48:16.830266   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830768   23379 main.go:141] libmachine: (ha-604935-m03) Found IP for machine: 192.168.39.211
	I1202 11:48:16.830807   23379 main.go:141] libmachine: (ha-604935-m03) Reserving static IP address...
	I1202 11:48:16.831141   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find host DHCP lease matching {name: "ha-604935-m03", mac: "52:54:00:56:c4:59", ip: "192.168.39.211"} in network mk-ha-604935
	I1202 11:48:16.902131   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Getting to WaitForSSH function...
	I1202 11:48:16.902164   23379 main.go:141] libmachine: (ha-604935-m03) Reserved static IP address: 192.168.39.211
	I1202 11:48:16.902173   23379 main.go:141] libmachine: (ha-604935-m03) Waiting for SSH to be available...
	I1202 11:48:16.905075   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905526   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:16.905551   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH client type: external
	I1202 11:48:16.905772   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa (-rw-------)
	I1202 11:48:16.905800   23379 main.go:141] libmachine: (ha-604935-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:48:16.905820   23379 main.go:141] libmachine: (ha-604935-m03) DBG | About to run SSH command:
	I1202 11:48:16.905851   23379 main.go:141] libmachine: (ha-604935-m03) DBG | exit 0
	I1202 11:48:17.032533   23379 main.go:141] libmachine: (ha-604935-m03) DBG | SSH cmd err, output: <nil>: 
	I1202 11:48:17.032776   23379 main.go:141] libmachine: (ha-604935-m03) KVM machine creation complete!
	I1202 11:48:17.033131   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:17.033671   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.033865   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.034018   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:48:17.034033   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetState
	I1202 11:48:17.035293   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:48:17.035305   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:48:17.035310   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:48:17.035315   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.037352   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.037774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037900   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.038083   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038238   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038381   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.038530   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.038713   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.038724   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:48:17.143327   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.143352   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:48:17.143372   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.146175   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146516   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.146548   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146646   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.146838   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.146983   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.147108   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.147258   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.147425   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.147438   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:48:17.253131   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:48:17.253218   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:48:17.253233   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:48:17.253245   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253510   23379 buildroot.go:166] provisioning hostname "ha-604935-m03"
	I1202 11:48:17.253537   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253707   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.256428   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.256796   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256946   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.257116   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257249   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257377   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.257504   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.257691   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.257703   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m03 && echo "ha-604935-m03" | sudo tee /etc/hostname
	I1202 11:48:17.375185   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m03
	
	I1202 11:48:17.375210   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.377667   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378038   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.378062   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378264   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.378483   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378634   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378780   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.378929   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.379106   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.379136   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:48:17.496248   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.496279   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:48:17.496297   23379 buildroot.go:174] setting up certificates
	I1202 11:48:17.496309   23379 provision.go:84] configureAuth start
	I1202 11:48:17.496322   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.496560   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:17.499486   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.499912   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.499947   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.500094   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.502337   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502712   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.502737   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502856   23379 provision.go:143] copyHostCerts
	I1202 11:48:17.502886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.502931   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:48:17.502944   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.503023   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:48:17.503097   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503116   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:48:17.503123   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503148   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:48:17.503191   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503207   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:48:17.503214   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503234   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:48:17.503299   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m03 san=[127.0.0.1 192.168.39.211 ha-604935-m03 localhost minikube]
	I1202 11:48:17.587852   23379 provision.go:177] copyRemoteCerts
	I1202 11:48:17.587906   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:48:17.587927   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.590598   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.590995   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.591015   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.591197   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.591367   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.591543   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.591679   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:17.674221   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:48:17.674296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:48:17.698597   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:48:17.698660   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:48:17.723039   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:48:17.723097   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:48:17.747396   23379 provision.go:87] duration metric: took 251.076751ms to configureAuth
	I1202 11:48:17.747416   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:48:17.747635   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:17.747715   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.750670   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751052   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.751081   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751262   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.751452   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751599   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751748   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.751905   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.752098   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.752117   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:48:17.976945   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:48:17.976975   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:48:17.976987   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetURL
	I1202 11:48:17.978227   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using libvirt version 6000000
	I1202 11:48:17.980581   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.980959   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.980987   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.981117   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:48:17.981135   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:48:17.981143   23379 client.go:171] duration metric: took 23.211756514s to LocalClient.Create
	I1202 11:48:17.981168   23379 start.go:167] duration metric: took 23.211833697s to libmachine.API.Create "ha-604935"
	I1202 11:48:17.981181   23379 start.go:293] postStartSetup for "ha-604935-m03" (driver="kvm2")
	I1202 11:48:17.981196   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:48:17.981223   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.981429   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:48:17.981453   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.983470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983816   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.983841   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983966   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.984144   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.984312   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.984449   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.067334   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:48:18.072037   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:48:18.072060   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:48:18.072140   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:48:18.072226   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:48:18.072251   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:48:18.072352   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:48:18.083182   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:18.110045   23379 start.go:296] duration metric: took 128.848906ms for postStartSetup
	I1202 11:48:18.110090   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:18.110693   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.113273   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113636   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.113656   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113891   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:48:18.114175   23379 start.go:128] duration metric: took 23.363096022s to createHost
	I1202 11:48:18.114201   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.116660   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.116982   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.117010   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.117166   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.117378   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117545   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117689   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.117845   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:18.118040   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:18.118051   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:48:18.225174   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140098.198364061
	
	I1202 11:48:18.225197   23379 fix.go:216] guest clock: 1733140098.198364061
	I1202 11:48:18.225206   23379 fix.go:229] Guest: 2024-12-02 11:48:18.198364061 +0000 UTC Remote: 2024-12-02 11:48:18.114189112 +0000 UTC m=+146.672947053 (delta=84.174949ms)
	I1202 11:48:18.225226   23379 fix.go:200] guest clock delta is within tolerance: 84.174949ms
	I1202 11:48:18.225232   23379 start.go:83] releasing machines lock for "ha-604935-m03", held for 23.474299783s
	I1202 11:48:18.225255   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.225523   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.228223   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.228665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.228698   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.231057   23379 out.go:177] * Found network options:
	I1202 11:48:18.232381   23379 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.96
	W1202 11:48:18.233581   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.233602   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.233614   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234079   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234244   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234317   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:48:18.234369   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	W1202 11:48:18.234421   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.234435   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.234477   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:48:18.234492   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.237268   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237547   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237709   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.237734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237883   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.237989   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.238016   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.238057   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238152   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.238220   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238300   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238378   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.238455   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238579   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.473317   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:48:18.479920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:48:18.479984   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:48:18.496983   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:48:18.497001   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:48:18.497065   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:48:18.513241   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:48:18.527410   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:48:18.527466   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:48:18.541725   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:48:18.557008   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:48:18.688718   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:48:18.852643   23379 docker.go:233] disabling docker service ...
	I1202 11:48:18.852707   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:48:18.868163   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:48:18.881925   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:48:19.017240   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:48:19.151423   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:48:19.165081   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:48:19.183322   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:48:19.183382   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.193996   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:48:19.194053   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.204159   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.214125   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.224009   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:48:19.234581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.244825   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.261368   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
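All of the sed invocations above edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from the commands in this log (the section headers are the stock CRI-O ones and are an assumption here, not read back from the host), the relevant parts of that file should end up roughly as:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"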
	I1202 11:48:19.270942   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:48:19.279793   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:48:19.279828   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:48:19.292711   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
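The earlier sysctl failure only means the br_netfilter module had not been loaded yet; once the modprobe above succeeds the bridge netfilter keys exist, and the echo switches on IPv4 forwarding. A quick manual check with standard Linux tooling (illustrative, not something this run executes) would be:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward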
	I1202 11:48:19.302043   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:19.426581   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:48:19.517813   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:48:19.517869   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:48:19.523046   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:48:19.523100   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:48:19.526693   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:48:19.569077   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:48:19.569154   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.606184   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.639221   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:48:19.640557   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:48:19.641750   23379 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.96
	I1202 11:48:19.642878   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:19.645504   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.645963   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:19.645990   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.646180   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:48:19.650508   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:48:19.664882   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:48:19.665139   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:19.665497   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.665538   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.680437   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1202 11:48:19.680830   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.681262   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.681286   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.681575   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.681746   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:48:19.683191   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:19.683564   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.683606   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.697831   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I1202 11:48:19.698152   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.698542   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.698559   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.698845   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.699001   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:19.699166   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.211
	I1202 11:48:19.699179   23379 certs.go:194] generating shared ca certs ...
	I1202 11:48:19.699197   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.699318   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:48:19.699355   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:48:19.699364   23379 certs.go:256] generating profile certs ...
	I1202 11:48:19.699432   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:48:19.699455   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864
	I1202 11:48:19.699468   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.211 192.168.39.254]
	I1202 11:48:19.775540   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 ...
	I1202 11:48:19.775561   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864: {Name:mk862a073739ee2a78cf9f81a3258f4be6a2f692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775718   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 ...
	I1202 11:48:19.775732   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864: {Name:mk2b946b8deaf42e144aacb0aeac107c1e5e5346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775826   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:48:19.775947   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:48:19.776063   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:48:19.776077   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:48:19.776089   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:48:19.776102   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:48:19.776114   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:48:19.776131   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:48:19.776145   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:48:19.776157   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:48:19.800328   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:48:19.800402   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:48:19.800434   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:48:19.800443   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:48:19.800467   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:48:19.800488   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:48:19.800508   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:48:19.800550   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:19.800576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:19.800589   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:48:19.800601   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:48:19.800629   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:19.803275   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803700   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:19.803723   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803908   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:19.804099   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:19.804214   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:19.804377   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:19.880485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:48:19.886022   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:48:19.898728   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:48:19.903305   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:48:19.914871   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:48:19.919141   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:48:19.929566   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:48:19.933478   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:48:19.943613   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:48:19.948089   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:48:19.958895   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:48:19.964303   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:48:19.977617   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:48:20.002994   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:48:20.029806   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:48:20.053441   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:48:20.076846   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1202 11:48:20.100859   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:48:20.123816   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:48:20.147882   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:48:20.170789   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:48:20.194677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:48:20.217677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:48:20.242059   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:48:20.259613   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:48:20.277187   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:48:20.294496   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:48:20.311183   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:48:20.328629   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:48:20.347609   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:48:20.365780   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:48:20.371782   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:48:20.383879   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388524   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388568   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.394674   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:48:20.407273   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:48:20.419450   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424025   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424067   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.429730   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:48:20.440110   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:48:20.451047   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456468   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456512   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.462924   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
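The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the usual OpenSSL CA-directory convention: the subject hash printed by the preceding openssl x509 -hash -noout calls, with a .0 suffix. For example, for the minikube CA copied earlier:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above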
	I1202 11:48:20.474358   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:48:20.478447   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:48:20.478499   23379 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1202 11:48:20.478603   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:48:20.478639   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:48:20.478678   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:48:20.496205   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:48:20.496274   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
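This manifest is never applied through the API server; a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes), where the kubelet runs it as a static pod advertising the control-plane VIP 192.168.39.254 on port 8443. A client-side dry run is one way to sanity-check such a manifest offline (illustrative only, not part of this run):

    kubectl apply --dry-run=client -f kube-vip.yaml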
	I1202 11:48:20.496312   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.507618   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:48:20.507658   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.517119   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1202 11:48:20.517130   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:48:20.517161   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517164   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:48:20.517126   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1202 11:48:20.517219   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517234   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.517303   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.534132   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534202   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534220   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:48:20.534247   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:48:20.534296   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:48:20.534330   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:48:20.553870   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:48:20.553896   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
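
The three transfers above follow the same pattern: the binary is fetched from dl.k8s.io, verified against the published .sha256 file referenced by the ?checksum=file: query, and copied into /var/lib/minikube/binaries/v1.31.2/. A stdlib-only Go sketch of that verify-then-install step (an illustration of the pattern, not minikube's own binary.go):

// Download kubelet v1.31.2 and refuse to install it unless its SHA-256
// digest matches the published checksum file. The whole binary is
// buffered in memory for brevity.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"

	// The checksum file holds the hex digest (possibly followed by a
	// file name), so only the first whitespace-separated field matters.
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0]

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch, refusing to install kubelet")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubelet verified and written,", len(bin), "bytes")
}
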
	I1202 11:48:21.369626   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:48:21.380201   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1202 11:48:21.397686   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:48:21.414134   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:48:21.430962   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:48:21.434795   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
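
The bash one-liner above makes the /etc/hosts update idempotent: any existing line ending in a tab plus control-plane.minikube.internal is dropped before the VIP mapping is appended. The same logic as a Go sketch (illustrative only; it writes /etc/hosts directly instead of going through the /tmp/h.$$ temp file and sudo cp, so it must run as root):

// Rewrite /etc/hosts so it contains exactly one mapping for the
// control-plane VIP name used by the cluster above.
package main

import (
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const entry = "192.168.39.254\t" + name

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors the grep -v $'\t'<name>$ filter in the logged command.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
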
	I1202 11:48:21.446707   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:21.575648   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:21.592190   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:21.592653   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:21.592702   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:21.607602   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I1202 11:48:21.608034   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:21.608505   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:21.608523   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:21.608871   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:21.609064   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:21.609215   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:48:21.609330   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:48:21.609352   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:21.612246   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612678   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:21.612705   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612919   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:21.613101   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:21.613260   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:21.613431   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:21.802258   23379 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:21.802311   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1202 11:48:44.058534   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.25619815s)
	I1202 11:48:44.058574   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:48:44.589392   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m03 minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:48:44.754182   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:48:44.876509   23379 start.go:319] duration metric: took 23.267291972s to joinCluster
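
joinCluster is the sequence visible above: mint a join command on the primary control-plane node with a non-expiring token, replay it on the new node with the --control-plane flags, then enable kubelet and label/untaint the node. A Go sketch of the first two steps (it assumes kubeadm is on PATH and only prints the join command rather than executing it on the remote node):

// Step 1 runs on an existing control-plane node; step 2 is what the new
// node would run, shown here with the same extra flags as the logged join.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// --ttl=0 makes the bootstrap token non-expiring, matching the log.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// The extra flags promote the joining node to a control-plane member
	// advertising on its own IP, as in the command executed above.
	fmt.Println(join +
		" --control-plane" +
		" --apiserver-advertise-address=192.168.39.211" +
		" --apiserver-bind-port=8443")
}
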
	I1202 11:48:44.876583   23379 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:44.876929   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:44.877896   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:48:44.879178   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:45.205771   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:45.227079   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:48:45.227379   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:48:45.227437   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:48:45.227646   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m03" to be "Ready" ...
	I1202 11:48:45.227731   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.227739   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.227750   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.227760   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.230602   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:45.728816   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.728844   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.728856   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.728862   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.732325   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:46.228808   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.228838   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.228847   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.228855   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.232971   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:46.728246   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.728266   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.728275   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.728278   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.731578   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:47.228275   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.228293   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.228302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.228305   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.231235   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:47.231687   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:47.728543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.728564   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.728575   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.728580   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.731725   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.228100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.228126   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.228134   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.228139   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.231200   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.727927   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.727953   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.727965   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.727971   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.731841   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.228251   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.228277   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.228288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.228295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.231887   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.232816   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:49.728539   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.728558   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.728567   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.728578   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.731618   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.228164   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.228182   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.228190   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.228194   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.231677   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.728841   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.728865   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.728877   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.728884   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.731790   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:51.227844   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.227875   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.227882   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.227886   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.231092   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.728369   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.728389   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.728397   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.728402   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.731512   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.732161   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:52.228555   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.228577   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.228585   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.228590   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.232624   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:52.727915   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.727935   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.727942   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.727946   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.731213   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:53.228361   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.228382   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.228389   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.228392   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.233382   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:53.728248   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.728268   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.728276   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.728280   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.731032   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:54.228383   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.228402   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.228409   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.228414   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.231567   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:54.232182   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:54.728033   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.728054   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.728070   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.728078   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.731003   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:55.227931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.227952   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.227959   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.227963   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.231124   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:55.728257   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.728282   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.728295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.728302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.731469   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.228616   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.228634   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.228642   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.228648   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.231749   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.232413   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:56.728627   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.728662   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.728672   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.728679   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.731199   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.228073   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.228095   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.228106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.228112   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.231071   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.728355   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.728374   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.728386   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.728390   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.732053   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.228692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.228716   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.228725   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.228731   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.231871   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.232534   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:58.727842   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.727867   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.727888   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.727893   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.730412   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:59.228495   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.228515   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.228522   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.228525   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.232497   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:59.728247   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.728264   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.728272   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.728275   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.731212   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.227900   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.227922   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.227929   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.227932   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.232057   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:00.233141   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:49:00.728080   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.728104   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.728116   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.728123   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.730928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.731736   23379 node_ready.go:49] node "ha-604935-m03" has status "Ready":"True"
	I1202 11:49:00.731754   23379 node_ready.go:38] duration metric: took 15.50409308s for node "ha-604935-m03" to be "Ready" ...
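
The repeated GET /api/v1/nodes/ha-604935-m03 requests above are a ~500ms poll of the node's Ready condition. A client-go sketch of the same wait (assumptions: kubeconfig at the default path, node name hard-coded):

// Poll the node object until its Ready condition is True, or give up
// after the same 6m budget used by the test.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-604935-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-604935-m03 is Ready")
}
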
	I1202 11:49:00.731762   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:00.731812   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:00.731821   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.731828   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.731833   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.737119   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:00.743811   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.743881   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:49:00.743889   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.743896   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.743900   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.746447   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.747270   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.747288   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.747298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.747304   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.750173   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.750663   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.750685   23379 pod_ready.go:82] duration metric: took 6.851528ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750697   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750762   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:49:00.750773   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.750782   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.750787   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.753393   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.754225   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.754242   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.754253   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.754261   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.756959   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.757348   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.757363   23379 pod_ready.go:82] duration metric: took 6.658502ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757372   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757427   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:49:00.757438   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.757444   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.757449   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.759919   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.760524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.760540   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.760551   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.760557   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.762639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.763103   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.763117   23379 pod_ready.go:82] duration metric: took 5.738836ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763130   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763170   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:49:00.763178   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.763184   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.763187   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.765295   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.765840   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:00.765853   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.765859   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.765866   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.767856   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:49:00.768294   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.768308   23379 pod_ready.go:82] duration metric: took 5.173078ms for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.768315   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.928568   23379 request.go:632] Waited for 160.204775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928622   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928630   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.928637   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.931639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.129121   23379 request.go:632] Waited for 196.362858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129188   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129194   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.129201   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.129206   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.132093   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.132639   23379 pod_ready.go:93] pod "etcd-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.132663   23379 pod_ready.go:82] duration metric: took 364.340751ms for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.132685   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.328581   23379 request.go:632] Waited for 195.818618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328645   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.328651   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.328659   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.332129   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.528887   23379 request.go:632] Waited for 196.197458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528960   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528968   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.528983   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.528991   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.531764   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.532366   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.532385   23379 pod_ready.go:82] duration metric: took 399.689084ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.532395   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.729145   23379 request.go:632] Waited for 196.686289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729214   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729222   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.729232   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.729241   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.732550   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.928940   23379 request.go:632] Waited for 195.375728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929027   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929039   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.929049   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.929060   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.932849   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.933394   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.933415   23379 pod_ready.go:82] duration metric: took 401.013286ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.933428   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.128618   23379 request.go:632] Waited for 195.115216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128704   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.128714   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.128744   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.132085   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.328195   23379 request.go:632] Waited for 195.287157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328272   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328280   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.328290   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.328294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.331350   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.332062   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.332086   23379 pod_ready.go:82] duration metric: took 398.648799ms for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.332096   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.528402   23379 request.go:632] Waited for 196.237056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528456   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528461   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.528468   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.528471   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.531001   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:02.729030   23379 request.go:632] Waited for 197.344265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729083   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729088   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.729095   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.729101   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.733927   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:02.734415   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.734433   23379 pod_ready.go:82] duration metric: took 402.330362ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.734442   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.928547   23379 request.go:632] Waited for 194.020533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928615   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928624   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.928634   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.933547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.128827   23379 request.go:632] Waited for 194.344486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128890   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128895   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.128915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.128921   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.133610   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.134316   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.134333   23379 pod_ready.go:82] duration metric: took 399.884969ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.134345   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.328421   23379 request.go:632] Waited for 194.000988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328488   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328493   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.328500   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.328505   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.331240   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:03.528448   23379 request.go:632] Waited for 196.353439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528532   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.528542   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.528554   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.532267   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.532704   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.532722   23379 pod_ready.go:82] duration metric: took 398.368333ms for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.532747   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.728896   23379 request.go:632] Waited for 196.080235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728966   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728972   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.728979   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.728982   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.732009   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.929024   23379 request.go:632] Waited for 196.282412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929090   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929096   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.929106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.929111   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.932496   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.933154   23379 pod_ready.go:93] pod "kube-proxy-rp7t2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.933174   23379 pod_ready.go:82] duration metric: took 400.416355ms for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.933184   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.128132   23379 request.go:632] Waited for 194.87576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128183   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128188   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.128196   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.128200   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.131316   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.328392   23379 request.go:632] Waited for 196.344562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328464   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.328488   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.328504   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.331622   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.332330   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.332349   23379 pod_ready.go:82] duration metric: took 399.158434ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.332362   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.528404   23379 request.go:632] Waited for 195.973025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528476   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528485   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.528499   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.528512   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.531287   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.728831   23379 request.go:632] Waited for 196.723103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728880   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728888   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.728918   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.728926   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.731917   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.732716   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.732733   23379 pod_ready.go:82] duration metric: took 400.363929ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.732741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.928126   23379 request.go:632] Waited for 195.328391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928219   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.928242   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.928251   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.931908   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.129033   23379 request.go:632] Waited for 196.165096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129107   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129114   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.129124   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.129131   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.132837   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.133502   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.133521   23379 pod_ready.go:82] duration metric: took 400.774358ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.133531   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.328705   23379 request.go:632] Waited for 195.110801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328775   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328782   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.328792   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.328804   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.332423   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.528425   23379 request.go:632] Waited for 195.360611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528479   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528484   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.528491   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.528494   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.531378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:05.531939   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.531957   23379 pod_ready.go:82] duration metric: took 398.419577ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.531967   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.728987   23379 request.go:632] Waited for 196.947438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729040   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729045   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.729052   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.729056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.732940   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.928937   23379 request.go:632] Waited for 195.348906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928990   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928996   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.929007   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.929023   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.932936   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.933995   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.934013   23379 pod_ready.go:82] duration metric: took 402.03942ms for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.934028   23379 pod_ready.go:39] duration metric: took 5.202257007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:05.934044   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:49:05.934111   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:49:05.950308   23379 api_server.go:72] duration metric: took 21.073692026s to wait for apiserver process to appear ...
	I1202 11:49:05.950330   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:49:05.950350   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:49:05.954392   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:49:05.954463   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:49:05.954472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.954479   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.954484   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.955264   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:49:05.955324   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:49:05.955340   23379 api_server.go:131] duration metric: took 5.002951ms to wait for apiserver health ...
	I1202 11:49:05.955348   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:49:06.128765   23379 request.go:632] Waited for 173.340291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128831   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128854   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.128868   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.128878   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.134738   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:06.141415   23379 system_pods.go:59] 24 kube-system pods found
	I1202 11:49:06.141437   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.141442   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.141446   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.141449   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.141453   23379 system_pods.go:61] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.141457   23379 system_pods.go:61] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.141461   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.141464   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.141468   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.141471   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.141475   23379 system_pods.go:61] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.141479   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.141487   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.141494   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.141507   23379 system_pods.go:61] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.141512   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.141517   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.141523   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.141527   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.141531   23379 system_pods.go:61] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.141534   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.141540   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.141543   23379 system_pods.go:61] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.141545   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.141551   23379 system_pods.go:74] duration metric: took 186.197102ms to wait for pod list to return data ...
	I1202 11:49:06.141560   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:49:06.329008   23379 request.go:632] Waited for 187.367529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329113   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.329125   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.329130   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.332755   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.332967   23379 default_sa.go:45] found service account: "default"
	I1202 11:49:06.332983   23379 default_sa.go:55] duration metric: took 191.417488ms for default service account to be created ...
	I1202 11:49:06.332991   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:49:06.528293   23379 request.go:632] Waited for 195.242273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528375   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.528382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.528388   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.533257   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:06.539940   23379 system_pods.go:86] 24 kube-system pods found
	I1202 11:49:06.539965   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.539970   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.539976   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.539980   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.539983   23379 system_pods.go:89] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.539986   23379 system_pods.go:89] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.539989   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.539995   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.539998   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.540002   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.540006   23379 system_pods.go:89] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.540009   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.540013   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.540016   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.540020   23379 system_pods.go:89] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.540024   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.540028   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.540034   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.540037   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.540040   23379 system_pods.go:89] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.540043   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.540046   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.540049   23379 system_pods.go:89] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.540053   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.540058   23379 system_pods.go:126] duration metric: took 207.062281ms to wait for k8s-apps to be running ...
	I1202 11:49:06.540068   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:49:06.540106   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:49:06.555319   23379 system_svc.go:56] duration metric: took 15.24289ms WaitForService to wait for kubelet
	I1202 11:49:06.555341   23379 kubeadm.go:582] duration metric: took 21.678727669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:49:06.555356   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:49:06.728222   23379 request.go:632] Waited for 172.787542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728311   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728317   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.728327   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.728332   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.731784   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.733040   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733062   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733074   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733079   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733084   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733088   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733094   23379 node_conditions.go:105] duration metric: took 177.727321ms to run NodePressure ...
	I1202 11:49:06.733107   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:49:06.733138   23379 start.go:255] writing updated cluster config ...
	I1202 11:49:06.733452   23379 ssh_runner.go:195] Run: rm -f paused
	I1202 11:49:06.787558   23379 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:49:06.789249   23379 out.go:177] * Done! kubectl is now configured to use "ha-604935" cluster and "default" namespace by default
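	(Editorial note, not part of the captured output: the apiserver readiness sequence logged above — pgrep for the kube-apiserver process, then polling https://192.168.39.102:8443/healthz until it returns 200 "ok" — can be reproduced with a short standalone probe. The sketch below is illustrative only: the endpoint address is taken from this run's log, and it skips TLS verification purely for brevity, whereas the real check uses the cluster's CA material from the kubeconfig.)

	// Minimal sketch of an apiserver /healthz probe, assuming the endpoint
	// observed in this run (192.168.39.102:8443). Not the minikube code path.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// For illustration only; a real client should verify the
				// apiserver certificate against the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.102:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}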
	
	
	==> CRI-O <==
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.360211795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ad46ce0-99b7-42c9-ad8f-4492f39e543f name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.361327054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac75360c-9f5d-44c7-a6dd-04820b21635a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.362018244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140376361996817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac75360c-9f5d-44c7-a6dd-04820b21635a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.362523661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90f918a6-86d2-4799-a280-e207d4055fb7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.362579280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90f918a6-86d2-4799-a280-e207d4055fb7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.362799575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90f918a6-86d2-4799-a280-e207d4055fb7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.400021002Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27151781-f592-43a6-b3c3-f3d153e8eece name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.400296075Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8jxc4,Uid:f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733140148279592187,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:49:07.666936685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1023dda9-1199-4200-9b82-bb054a0eedff,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1733140013381225285,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T11:46:53.065981152Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-g48q9,Uid:66ce87a9-4918-45fd-9721-d4e6323b7b54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733140013379375022,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:53.065488407Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5gcc2,Uid:63fea190-8001-4264-a579-13a9cae6ddff,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733140013372020076,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63fea190-8001-4264-a579-13a9cae6ddff,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:53.058488150Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&PodSandboxMetadata{Name:kindnet-k99r8,Uid:e5466844-1f48-46c2-8e34-c4bf016b9656,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139999159079477,Labels:map[string]string{app: kindnet,controller-revision-hash: 65ddb8b87b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:38.840314062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&PodSandboxMetadata{Name:kube-proxy-tqcb6,Uid:d576fbb5-bee1-4482-82f5-b21a5e1e65f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139999157955919,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:46:38.836053895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-604935,Uid:3795b7eb129e1555193fc4481f415c61,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733139987835770182,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3795b7eb129e1555193fc4481f415c61,kubernetes.io/config.seen: 2024-12-02T11:46:27.334541833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-604935,Uid:e34a31690bf4b94086a296305429f2bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987829372109,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{kubernetes.io/config.hash: e34a
31690bf4b94086a296305429f2bd,kubernetes.io/config.seen: 2024-12-02T11:46:27.334542605Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-604935,Uid:1298b086a2bd0a1c4a6a3d5c72224eab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987825890188,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.102:8443,kubernetes.io/config.hash: 1298b086a2bd0a1c4a6a3d5c72224eab,kubernetes.io/config.seen: 2024-12-02T11:46:27.334538959Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Met
adata:&PodSandboxMetadata{Name:etcd-ha-604935,Uid:7e46709c5369afc1ad72a60c327e7e03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987807865871,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.102:2379,kubernetes.io/config.hash: 7e46709c5369afc1ad72a60c327e7e03,kubernetes.io/config.seen: 2024-12-02T11:46:27.334535639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-604935,Uid:367ab693a9f84a18356ae64542b127be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139987806690295,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 367ab693a9f84a18356ae64542b127be,kubernetes.io/config.seen: 2024-12-02T11:46:27.334540819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=27151781-f592-43a6-b3c3-f3d153e8eece name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.401111582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2da9f75-3f88-41e7-80e1-a7099e3d0ab1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.401165862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2da9f75-3f88-41e7-80e1-a7099e3d0ab1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.401870758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2da9f75-3f88-41e7-80e1-a7099e3d0ab1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.408660183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=222d7093-f98d-4558-b687-fcb8556e75fb name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.408809201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=222d7093-f98d-4558-b687-fcb8556e75fb name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.410009526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f106df7d-75d3-4b90-8780-14a89186caa5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.410410006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140376410394395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f106df7d-75d3-4b90-8780-14a89186caa5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.411057336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5f9fbe8-5226-4b26-a6f1-00825057de18 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.411099655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5f9fbe8-5226-4b26-a6f1-00825057de18 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.411295681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5f9fbe8-5226-4b26-a6f1-00825057de18 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.453768918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50201f1e-53ce-40c2-83cf-6e13ffffdfb7 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.453829051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50201f1e-53ce-40c2-83cf-6e13ffffdfb7 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.455089850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfc7e6b1-ed7e-412f-8f2a-813bbe99f92e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.455694816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140376455672386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfc7e6b1-ed7e-412f-8f2a-813bbe99f92e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.456097227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7359c0a8-20f9-4505-8878-f80d4c77788a name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.456145598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7359c0a8-20f9-4505-8878-f80d4c77788a name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:52:56 ha-604935 crio[658]: time="2024-12-02 11:52:56.456541296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7359c0a8-20f9-4505-8878-f80d4c77788a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	27068dc5178bb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1f0c13e663748       busybox-7dff88458-8jxc4
	be0c4adffd61b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   72cc1a04d8965       coredns-7c65d6cfc9-g48q9
	91c90e9d05cf7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   abbb2caf2ff00       coredns-7c65d6cfc9-5gcc2
	9d7d77b59569b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   40752b9892351       storage-provisioner
	579b11920d9fd       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   646eade60f2d2       kindnet-k99r8
	f6a700874f779       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   8ba57f92e62cd       kube-proxy-tqcb6
	17bfa0393f187       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   096eb67e8b05d       kube-vip-ha-604935
	275d716cfd4f7       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8978121739b66       kube-controller-manager-ha-604935
	090e4a0254277       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   1989811c4f393       kube-scheduler-ha-604935
	53184ed95349a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ec95830bfe24d       etcd-ha-604935
	9624bba327f9b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   fc4151eee5a3f       kube-apiserver-ha-604935
	
	
	==> coredns [91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f] <==
	[INFO] 10.244.0.4:39323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215731s
	[INFO] 10.244.0.4:33525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162613s
	[INFO] 10.244.0.4:39123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125815s
	[INFO] 10.244.0.4:37376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000244786s
	[INFO] 10.244.2.2:44210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174232s
	[INFO] 10.244.2.2:54748 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001765833s
	[INFO] 10.244.2.2:60174 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284786s
	[INFO] 10.244.2.2:50584 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109022s
	[INFO] 10.244.2.2:34854 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001186229s
	[INFO] 10.244.2.2:42659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081441s
	[INFO] 10.244.2.2:51018 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119851s
	[INFO] 10.244.1.2:51189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371264s
	[INFO] 10.244.1.2:57162 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158703s
	[INFO] 10.244.0.4:59693 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068002s
	[INFO] 10.244.0.4:51163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042042s
	[INFO] 10.244.2.2:40625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117188s
	[INFO] 10.244.1.2:49002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091339s
	[INFO] 10.244.1.2:42507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192925s
	[INFO] 10.244.0.4:36452 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215238s
	[INFO] 10.244.0.4:41389 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010969s
	[INFO] 10.244.2.2:55194 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180309s
	[INFO] 10.244.2.2:45875 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109142s
	[INFO] 10.244.1.2:42301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164839s
	[INFO] 10.244.1.2:47133 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176562s
	[INFO] 10.244.1.2:42848 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122646s
	
	
	==> coredns [be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818] <==
	[INFO] 10.244.1.2:33047 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000108391s
	[INFO] 10.244.1.2:40927 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001980013s
	[INFO] 10.244.0.4:37566 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004168289s
	[INFO] 10.244.0.4:36737 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252503s
	[INFO] 10.244.0.4:33046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003375406s
	[INFO] 10.244.0.4:42598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177128s
	[INFO] 10.244.2.2:46358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148802s
	[INFO] 10.244.1.2:55837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194128s
	[INFO] 10.244.1.2:55278 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002096061s
	[INFO] 10.244.1.2:45640 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141771s
	[INFO] 10.244.1.2:36834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204172s
	[INFO] 10.244.1.2:41503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026722s
	[INFO] 10.244.1.2:46043 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001413s
	[INFO] 10.244.0.4:37544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011909s
	[INFO] 10.244.0.4:58597 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007644s
	[INFO] 10.244.2.2:41510 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179912s
	[INFO] 10.244.2.2:41733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013607s
	[INFO] 10.244.2.2:57759 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205972s
	[INFO] 10.244.1.2:54620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248357s
	[INFO] 10.244.1.2:40630 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109148s
	[INFO] 10.244.0.4:39309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113844s
	[INFO] 10.244.0.4:42691 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170784s
	[INFO] 10.244.2.2:41138 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112783s
	[INFO] 10.244.2.2:32778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073017s
	[INFO] 10.244.1.2:42298 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018329s
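	The two CoreDNS logs above are dominated by A/AAAA/PTR lookups for kubernetes.default.svc.cluster.local and host.minikube.internal, all answered NOERROR. For illustration only (this is not part of the captured test output), the Go sketch below issues the same in-cluster lookup; it assumes it runs inside a pod of this cluster, where /etc/resolv.conf points at the CoreDNS service, and would not resolve from outside.

	// Minimal sketch, not from the test run: repeat the lookup that appears
	// throughout the CoreDNS log. Only meaningful from inside a cluster pod.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()

		// Corresponds to the "A IN kubernetes.default.svc.cluster.local." entries above.
		addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs)
	}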
	
	
	==> describe nodes <==
	Name:               ha-604935
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-604935
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4653179aa8d04165a06718969a078842
	  System UUID:                4653179a-a8d0-4165-a067-18969a078842
	  Boot ID:                    059fb5e8-3774-458b-bfbf-8364817017d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8jxc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7c65d6cfc9-5gcc2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m18s
	  kube-system                 coredns-7c65d6cfc9-g48q9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m18s
	  kube-system                 etcd-ha-604935                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-k99r8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-604935             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-604935    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-tqcb6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-604935             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-604935                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  Starting                 6m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m22s (x2 over 6m22s)  kubelet          Node ha-604935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x2 over 6m22s)  kubelet          Node ha-604935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s (x2 over 6m22s)  kubelet          Node ha-604935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m19s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  NodeReady                6m3s                   kubelet          Node ha-604935 status is now: NodeReady
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	
	
	Name:               ha-604935-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:47:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:50:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-604935-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f21093f5748416fa30ea8181c31a3f7
	  System UUID:                0f21093f-5748-416f-a30e-a8181c31a3f7
	  Boot ID:                    5621b6a5-bb1a-408d-b692-10c4aad4b418
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xbb9t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-604935-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-l55rq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-604935-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-604935-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-w9r4x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-604935-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-604935-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node ha-604935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-604935-m02 status is now: NodeNotReady
	
	
	Name:               ha-604935-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:48:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:49:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-604935-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8588450b38914bf3ac287b253d72fb4d
	  System UUID:                8588450b-3891-4bf3-ac28-7b253d72fb4d
	  Boot ID:                    735a98f4-21e5-4433-a99b-76bab3cbd392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l5kq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-604935-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m14s
	  kube-system                 kindnet-j4cr6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m16s
	  kube-system                 kube-apiserver-ha-604935-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-ha-604935-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-rp7t2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-ha-604935-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-604935-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-604935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x7 over 4m16s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	
	
	Name:               ha-604935-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-604935-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 577fefe5032840e68ccf6ba2b6fbcf44
	  System UUID:                577fefe5-0328-40e6-8ccf-6ba2b6fbcf44
	  Boot ID:                    5f3dbc6d-6884-49f4-acef-8235bb29f467
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwxsc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m11s
	  kube-system                 kube-proxy-v649d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m12s)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m12s)  kubelet          Node ha-604935-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m12s)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m11s                  cidrAllocator    Node ha-604935-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  NodeReady                2m52s                  kubelet          Node ha-604935-m04 status is now: NodeReady
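	In the node descriptions above, ha-604935, ha-604935-m03 and ha-604935-m04 report Ready=True, while ha-604935-m02 has all conditions Unknown after its kubelet stopped posting status and carries the unreachable taints. As a rough illustration only (not part of the test log), the Go sketch below pulls the same Ready summary with client-go; the kubeconfig location is an assumption (the client-go default of ~/.kube/config).

	// Minimal sketch, not part of the test log: summarize each node's Ready
	// condition with client-go. The kubeconfig path is an assumption.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" {
					// Status is "True" for the healthy nodes and "Unknown"
					// for ha-604935-m02 in the dump above.
					fmt.Printf("%-16s Ready=%-8s %s\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}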
	
	
	==> dmesg <==
	[Dec 2 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051551] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040036] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 2 11:46] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.564296] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.579239] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.318373] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057883] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148672] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.135107] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.277991] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.959381] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.016173] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058991] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.327237] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.069565] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.092272] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.163087] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 2 11:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46] <==
	{"level":"warn","ts":"2024-12-02T11:52:56.668599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.671904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.707951Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.716246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.719900Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.727665Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.733562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.742136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.745209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.748604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.759661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.765615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.768921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.772379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.775262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.778041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.783053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.788565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.794661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.797489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.800900Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.803802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.809299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.814681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:52:56.869034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:52:56 up 7 min,  0 users,  load average: 0.66, 0.43, 0.19
	Linux ha-604935 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10] <==
	I1202 11:52:22.911211       1 main.go:301] handling current node
	I1202 11:52:32.901182       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:32.901286       1 main.go:301] handling current node
	I1202 11:52:32.901341       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:32.901493       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:32.901812       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:32.901855       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:32.902073       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:32.903249       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:42.901238       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:42.901327       1 main.go:301] handling current node
	I1202 11:52:42.901361       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:42.901380       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:42.901720       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:42.901758       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:42.903817       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:42.903856       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:52.900618       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:52.900723       1 main.go:301] handling current node
	I1202 11:52:52.900742       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:52.900750       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:52.901396       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:52.901501       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:52.901876       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:52.901972       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6] <==
	I1202 11:46:32.842650       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 11:46:32.848385       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1202 11:46:32.849164       1 controller.go:615] quota admission added evaluator for: endpoints
	I1202 11:46:32.859606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 11:46:33.159098       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1202 11:46:34.294370       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1202 11:46:34.315176       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	http2: server: error reading preface from client 192.168.39.254:47786: read tcp 192.168.39.254:8443->192.168.39.254:47786: read: connection reset by peer
	I1202 11:46:34.492102       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1202 11:46:38.758671       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1202 11:46:38.805955       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1202 11:49:11.846753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54452: use of closed network connection
	E1202 11:49:12.028104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54460: use of closed network connection
	E1202 11:49:12.199806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54474: use of closed network connection
	E1202 11:49:12.392612       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54484: use of closed network connection
	E1202 11:49:12.562047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54506: use of closed network connection
	E1202 11:49:12.747509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54530: use of closed network connection
	E1202 11:49:12.939816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54544: use of closed network connection
	E1202 11:49:13.121199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54562: use of closed network connection
	E1202 11:49:13.295085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54584: use of closed network connection
	E1202 11:49:13.578607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54612: use of closed network connection
	E1202 11:49:13.757972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54638: use of closed network connection
	E1202 11:49:14.099757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54676: use of closed network connection
	E1202 11:49:14.269710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54694: use of closed network connection
	E1202 11:49:14.441652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	
	
	==> kube-controller-manager [275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41] <==
	I1202 11:49:45.139269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.144540       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.233566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.349805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.679160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.939032       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-604935-m04"
	I1202 11:49:47.939241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.969287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.605926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.681129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:55.357132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.214872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.215953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:50:04.236833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.619357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:15.555711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:51:05.313473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.313596       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:51:05.338955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.387666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.010033ms"
	I1202 11:51:05.388828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.832µs"
	I1202 11:51:05.441675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.06791ms"
	I1202 11:51:05.442993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.629µs"
	I1202 11:51:07.990253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:10.625653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	
	
	==> kube-proxy [f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 11:46:39.991996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 11:46:40.020254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1202 11:46:40.020650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:46:40.086409       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 11:46:40.086557       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 11:46:40.086602       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:46:40.089997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:46:40.090696       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:46:40.090739       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:46:40.096206       1 config.go:199] "Starting service config controller"
	I1202 11:46:40.096522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:46:40.096732       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:46:40.096763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:46:40.098314       1 config.go:328] "Starting node config controller"
	I1202 11:46:40.099010       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:46:40.196939       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:46:40.197006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:46:40.199281       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35] <==
	W1202 11:46:32.142852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 11:46:32.142937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.153652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.153702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.221641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 11:46:32.221961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.358170       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:46:32.358291       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:46:32.429924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.430007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.430758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:46:32.430825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.449596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:46:32.449697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.505859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:46:32.505943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1202 11:46:34.815786       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1202 11:49:07.673886       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	E1202 11:49:07.674510       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fc236bbd-f34b-454f-a66d-b369cd19cf9d(default/busybox-7dff88458-xbb9t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xbb9t"
	E1202 11:49:07.674758       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.675368       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb(default/busybox-7dff88458-8jxc4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8jxc4"
	E1202 11:49:07.675694       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" pod="default/busybox-7dff88458-8jxc4"
	I1202 11:49:07.676018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.678080       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" pod="default/busybox-7dff88458-xbb9t"
	I1202 11:49:07.679000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	
	
	==> kubelet <==
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:51:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518783    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518905    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520250    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520275    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524305    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524384    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526662    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526711    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.527977    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.528325    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530019    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530407    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.436289    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531571    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531618    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532768    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532808    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:54 ha-604935 kubelet[1316]: E1202 11:52:54.535693    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140374535334388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:54 ha-604935 kubelet[1316]: E1202 11:52:54.535796    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140374535334388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-604935 -n ha-604935
helpers_test.go:261: (dbg) Run:  kubectl --context ha-604935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.36s)
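
For context (not part of the test output): the post-mortem above shells out to "out/minikube-linux-amd64 status --format={{.APIServer}}" and to a kubectl query with a field selector that surfaces any non-Running pods. Below is a minimal Go sketch of the same two probes, assuming minikube and kubectl are on PATH and the ha-604935 profile/context exists (the harness calls the locally built out/minikube-linux-amd64 instead).

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output; errors are printed
// rather than handled, since this is only a diagnostic sketch.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// API-server state as reported by minikube for the ha-604935 profile
	// (same flags as the helpers_test.go:254 invocation above).
	run("minikube", "status", "--format", "{{.APIServer}}", "-p", "ha-604935", "-n", "ha-604935")

	// Any pod not in phase Running, cluster-wide (same query as helpers_test.go:261).
	run("kubectl", "--context", "ha-604935", "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}")
}
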

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.299080646s)
ha_test.go:309: expected profile "ha-604935" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-604935\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-604935\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-604935\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.102\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.96\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.211\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.26\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\"
:false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"
MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-604935 -n ha-604935
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 logs -n 25: (1.38132992s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m03_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m04 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp testdata/cp-test.txt                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m03 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-604935 node stop m02 -v=7                                                     | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-604935 node start m02 -v=7                                                    | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:45:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:45:51.477333   23379 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:51.477429   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477436   23379 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:51.477440   23379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:51.477579   23379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:45:51.478080   23379 out.go:352] Setting JSON to false
	I1202 11:45:51.478853   23379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1703,"bootTime":1733138248,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:45:51.478907   23379 start.go:139] virtualization: kvm guest
	I1202 11:45:51.480873   23379 out.go:177] * [ha-604935] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:45:51.482060   23379 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:45:51.482068   23379 notify.go:220] Checking for updates...
	I1202 11:45:51.484245   23379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:45:51.485502   23379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:45:51.486630   23379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.487842   23379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:45:51.488928   23379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:45:51.490194   23379 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:45:51.523210   23379 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 11:45:51.524197   23379 start.go:297] selected driver: kvm2
	I1202 11:45:51.524207   23379 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:45:51.524217   23379 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:45:51.524886   23379 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.524953   23379 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:45:51.538752   23379 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:45:51.538805   23379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:45:51.539057   23379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:45:51.539096   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:45:51.539154   23379 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1202 11:45:51.539162   23379 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:45:51.539222   23379 start.go:340] cluster config:
	{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:45:51.539330   23379 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:45:51.540849   23379 out.go:177] * Starting "ha-604935" primary control-plane node in "ha-604935" cluster
	I1202 11:45:51.542035   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:45:51.542064   23379 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:45:51.542073   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:45:51.542155   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:45:51.542168   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:45:51.542474   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:45:51.542495   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json: {Name:mkd56e76e09e18927ad08e110fcb7c73441ee1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:45:51.542653   23379 start.go:360] acquireMachinesLock for ha-604935: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:45:51.542690   23379 start.go:364] duration metric: took 21.87µs to acquireMachinesLock for "ha-604935"
	I1202 11:45:51.542712   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:45:51.542769   23379 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 11:45:51.544215   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:45:51.544376   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:51.544410   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:51.558068   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38897
	I1202 11:45:51.558542   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:51.559117   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:45:51.559144   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:51.559441   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:51.559624   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:45:51.559747   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:45:51.559887   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:45:51.559913   23379 client.go:168] LocalClient.Create starting
	I1202 11:45:51.559938   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:45:51.559978   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.559999   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560059   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:45:51.560086   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:45:51.560103   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:45:51.560134   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:45:51.560147   23379 main.go:141] libmachine: (ha-604935) Calling .PreCreateCheck
	I1202 11:45:51.560467   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:45:51.560846   23379 main.go:141] libmachine: Creating machine...
	I1202 11:45:51.560861   23379 main.go:141] libmachine: (ha-604935) Calling .Create
	I1202 11:45:51.560982   23379 main.go:141] libmachine: (ha-604935) Creating KVM machine...
	I1202 11:45:51.562114   23379 main.go:141] libmachine: (ha-604935) DBG | found existing default KVM network
	I1202 11:45:51.562698   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.562571   23402 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002231e0}
	I1202 11:45:51.562725   23379 main.go:141] libmachine: (ha-604935) DBG | created network xml: 
	I1202 11:45:51.562738   23379 main.go:141] libmachine: (ha-604935) DBG | <network>
	I1202 11:45:51.562750   23379 main.go:141] libmachine: (ha-604935) DBG |   <name>mk-ha-604935</name>
	I1202 11:45:51.562762   23379 main.go:141] libmachine: (ha-604935) DBG |   <dns enable='no'/>
	I1202 11:45:51.562773   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562781   23379 main.go:141] libmachine: (ha-604935) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1202 11:45:51.562793   23379 main.go:141] libmachine: (ha-604935) DBG |     <dhcp>
	I1202 11:45:51.562803   23379 main.go:141] libmachine: (ha-604935) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1202 11:45:51.562814   23379 main.go:141] libmachine: (ha-604935) DBG |     </dhcp>
	I1202 11:45:51.562827   23379 main.go:141] libmachine: (ha-604935) DBG |   </ip>
	I1202 11:45:51.562839   23379 main.go:141] libmachine: (ha-604935) DBG |   
	I1202 11:45:51.562849   23379 main.go:141] libmachine: (ha-604935) DBG | </network>
	I1202 11:45:51.562861   23379 main.go:141] libmachine: (ha-604935) DBG | 
	I1202 11:45:51.567359   23379 main.go:141] libmachine: (ha-604935) DBG | trying to create private KVM network mk-ha-604935 192.168.39.0/24...
	I1202 11:45:51.627851   23379 main.go:141] libmachine: (ha-604935) DBG | private KVM network mk-ha-604935 192.168.39.0/24 created
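	The XML dumped in the DBG lines above is the private libvirt network the kvm2 driver creates for the cluster. As an illustrative sketch only (not minikube's actual code), a definition like it can be rendered from Go's text/template; the values below are copied from the log, while the type and field names are assumptions made for the example:

	package main

	import (
		"os"
		"text/template"
	)

	// netParams holds the values visible in the log; the type itself is illustrative.
	type netParams struct {
		Name      string
		Gateway   string
		Netmask   string
		DHCPStart string
		DHCPEnd   string
	}

	const netXML = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
	    </dhcp>
	  </ip>
	</network>
	`

	func main() {
		// Values copied from the DBG output above.
		p := netParams{
			Name:      "mk-ha-604935",
			Gateway:   "192.168.39.1",
			Netmask:   "255.255.255.0",
			DHCPStart: "192.168.39.2",
			DHCPEnd:   "192.168.39.253",
		}
		tmpl := template.Must(template.New("net").Parse(netXML))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}

	The rendered XML is what gets handed to libvirt (for example with virsh net-define and virsh net-start), matching the "created network xml" and "private KVM network ... created" lines above.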
	I1202 11:45:51.627878   23379 main.go:141] libmachine: (ha-604935) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:51.627909   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.627845   23402 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:51.627936   23379 main.go:141] libmachine: (ha-604935) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:45:51.627956   23379 main.go:141] libmachine: (ha-604935) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:45:51.873906   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:51.873783   23402 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa...
	I1202 11:45:52.258389   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258298   23402 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk...
	I1202 11:45:52.258412   23379 main.go:141] libmachine: (ha-604935) DBG | Writing magic tar header
	I1202 11:45:52.258421   23379 main.go:141] libmachine: (ha-604935) DBG | Writing SSH key tar header
	I1202 11:45:52.258433   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:52.258404   23402 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 ...
	I1202 11:45:52.258549   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935
	I1202 11:45:52.258587   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:45:52.258600   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935 (perms=drwx------)
	I1202 11:45:52.258612   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:45:52.258622   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:45:52.258639   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:45:52.258670   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:45:52.258686   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:52.258699   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:45:52.258711   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:45:52.258726   23379 main.go:141] libmachine: (ha-604935) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:45:52.258742   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:52.258748   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:45:52.258755   23379 main.go:141] libmachine: (ha-604935) DBG | Checking permissions on dir: /home
	I1202 11:45:52.258760   23379 main.go:141] libmachine: (ha-604935) DBG | Skipping /home - not owner
	I1202 11:45:52.259679   23379 main.go:141] libmachine: (ha-604935) define libvirt domain using xml: 
	I1202 11:45:52.259691   23379 main.go:141] libmachine: (ha-604935) <domain type='kvm'>
	I1202 11:45:52.259699   23379 main.go:141] libmachine: (ha-604935)   <name>ha-604935</name>
	I1202 11:45:52.259718   23379 main.go:141] libmachine: (ha-604935)   <memory unit='MiB'>2200</memory>
	I1202 11:45:52.259726   23379 main.go:141] libmachine: (ha-604935)   <vcpu>2</vcpu>
	I1202 11:45:52.259737   23379 main.go:141] libmachine: (ha-604935)   <features>
	I1202 11:45:52.259745   23379 main.go:141] libmachine: (ha-604935)     <acpi/>
	I1202 11:45:52.259755   23379 main.go:141] libmachine: (ha-604935)     <apic/>
	I1202 11:45:52.259762   23379 main.go:141] libmachine: (ha-604935)     <pae/>
	I1202 11:45:52.259776   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.259792   23379 main.go:141] libmachine: (ha-604935)   </features>
	I1202 11:45:52.259808   23379 main.go:141] libmachine: (ha-604935)   <cpu mode='host-passthrough'>
	I1202 11:45:52.259826   23379 main.go:141] libmachine: (ha-604935)   
	I1202 11:45:52.259835   23379 main.go:141] libmachine: (ha-604935)   </cpu>
	I1202 11:45:52.259843   23379 main.go:141] libmachine: (ha-604935)   <os>
	I1202 11:45:52.259851   23379 main.go:141] libmachine: (ha-604935)     <type>hvm</type>
	I1202 11:45:52.259863   23379 main.go:141] libmachine: (ha-604935)     <boot dev='cdrom'/>
	I1202 11:45:52.259871   23379 main.go:141] libmachine: (ha-604935)     <boot dev='hd'/>
	I1202 11:45:52.259896   23379 main.go:141] libmachine: (ha-604935)     <bootmenu enable='no'/>
	I1202 11:45:52.259912   23379 main.go:141] libmachine: (ha-604935)   </os>
	I1202 11:45:52.259917   23379 main.go:141] libmachine: (ha-604935)   <devices>
	I1202 11:45:52.259925   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='cdrom'>
	I1202 11:45:52.259935   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/boot2docker.iso'/>
	I1202 11:45:52.259939   23379 main.go:141] libmachine: (ha-604935)       <target dev='hdc' bus='scsi'/>
	I1202 11:45:52.259944   23379 main.go:141] libmachine: (ha-604935)       <readonly/>
	I1202 11:45:52.259951   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259956   23379 main.go:141] libmachine: (ha-604935)     <disk type='file' device='disk'>
	I1202 11:45:52.259963   23379 main.go:141] libmachine: (ha-604935)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:45:52.259970   23379 main.go:141] libmachine: (ha-604935)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/ha-604935.rawdisk'/>
	I1202 11:45:52.259978   23379 main.go:141] libmachine: (ha-604935)       <target dev='hda' bus='virtio'/>
	I1202 11:45:52.259982   23379 main.go:141] libmachine: (ha-604935)     </disk>
	I1202 11:45:52.259992   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260000   23379 main.go:141] libmachine: (ha-604935)       <source network='mk-ha-604935'/>
	I1202 11:45:52.260004   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260011   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260015   23379 main.go:141] libmachine: (ha-604935)     <interface type='network'>
	I1202 11:45:52.260020   23379 main.go:141] libmachine: (ha-604935)       <source network='default'/>
	I1202 11:45:52.260026   23379 main.go:141] libmachine: (ha-604935)       <model type='virtio'/>
	I1202 11:45:52.260031   23379 main.go:141] libmachine: (ha-604935)     </interface>
	I1202 11:45:52.260035   23379 main.go:141] libmachine: (ha-604935)     <serial type='pty'>
	I1202 11:45:52.260040   23379 main.go:141] libmachine: (ha-604935)       <target port='0'/>
	I1202 11:45:52.260045   23379 main.go:141] libmachine: (ha-604935)     </serial>
	I1202 11:45:52.260050   23379 main.go:141] libmachine: (ha-604935)     <console type='pty'>
	I1202 11:45:52.260059   23379 main.go:141] libmachine: (ha-604935)       <target type='serial' port='0'/>
	I1202 11:45:52.260081   23379 main.go:141] libmachine: (ha-604935)     </console>
	I1202 11:45:52.260097   23379 main.go:141] libmachine: (ha-604935)     <rng model='virtio'>
	I1202 11:45:52.260105   23379 main.go:141] libmachine: (ha-604935)       <backend model='random'>/dev/random</backend>
	I1202 11:45:52.260113   23379 main.go:141] libmachine: (ha-604935)     </rng>
	I1202 11:45:52.260119   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260131   23379 main.go:141] libmachine: (ha-604935)     
	I1202 11:45:52.260139   23379 main.go:141] libmachine: (ha-604935)   </devices>
	I1202 11:45:52.260142   23379 main.go:141] libmachine: (ha-604935) </domain>
	I1202 11:45:52.260148   23379 main.go:141] libmachine: (ha-604935) 
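	The domain XML above is what the driver defines for the ha-604935 VM: the boot ISO as a SCSI cdrom, the raw disk on virtio, two virtio NICs (cluster network plus default), a serial console, and a virtio RNG. A minimal sketch, assuming virsh is installed on the host and using the domain name from the log, of reading the stored definition back for inspection; this is not part of minikube, just standard libvirt tooling:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "ha-604935" is the domain name taken from the log above.
		out, err := exec.Command("virsh", "dumpxml", "ha-604935").CombinedOutput()
		if err != nil {
			fmt.Printf("virsh dumpxml failed: %v\n%s", err, out)
			return
		}
		// Prints the full definition, including the MAC addresses the DBG lines below report.
		fmt.Printf("%s", out)
	}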
	I1202 11:45:52.264453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e2:c6:db in network default
	I1202 11:45:52.264963   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:52.264976   23379 main.go:141] libmachine: (ha-604935) Ensuring networks are active...
	I1202 11:45:52.265536   23379 main.go:141] libmachine: (ha-604935) Ensuring network default is active
	I1202 11:45:52.265809   23379 main.go:141] libmachine: (ha-604935) Ensuring network mk-ha-604935 is active
	I1202 11:45:52.266301   23379 main.go:141] libmachine: (ha-604935) Getting domain xml...
	I1202 11:45:52.266972   23379 main.go:141] libmachine: (ha-604935) Creating domain...
	I1202 11:45:53.425942   23379 main.go:141] libmachine: (ha-604935) Waiting to get IP...
	I1202 11:45:53.426812   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.427160   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.427221   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.427145   23402 retry.go:31] will retry after 201.077519ms: waiting for machine to come up
	I1202 11:45:53.629564   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.629950   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.629976   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.629910   23402 retry.go:31] will retry after 339.273732ms: waiting for machine to come up
	I1202 11:45:53.970328   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:53.970740   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:53.970764   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:53.970705   23402 retry.go:31] will retry after 350.772564ms: waiting for machine to come up
	I1202 11:45:54.323244   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.323628   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.323652   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.323595   23402 retry.go:31] will retry after 510.154735ms: waiting for machine to come up
	I1202 11:45:54.834818   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:54.835184   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:54.835211   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:54.835141   23402 retry.go:31] will retry after 497.813223ms: waiting for machine to come up
	I1202 11:45:55.334326   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.334697   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.334728   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.334631   23402 retry.go:31] will retry after 593.538742ms: waiting for machine to come up
	I1202 11:45:55.929133   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:55.929547   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:55.929575   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:55.929508   23402 retry.go:31] will retry after 1.005519689s: waiting for machine to come up
	I1202 11:45:56.936100   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:56.936549   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:56.936581   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:56.936492   23402 retry.go:31] will retry after 1.273475187s: waiting for machine to come up
	I1202 11:45:58.211849   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:58.212240   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:58.212280   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:58.212213   23402 retry.go:31] will retry after 1.292529083s: waiting for machine to come up
	I1202 11:45:59.506572   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:45:59.506909   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:45:59.506934   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:45:59.506880   23402 retry.go:31] will retry after 1.800735236s: waiting for machine to come up
	I1202 11:46:01.309936   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:01.310447   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:01.310467   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:01.310416   23402 retry.go:31] will retry after 2.83980414s: waiting for machine to come up
	I1202 11:46:04.153261   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:04.153728   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:04.153748   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:04.153704   23402 retry.go:31] will retry after 2.497515599s: waiting for machine to come up
	I1202 11:46:06.652765   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:06.653095   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:06.653119   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:06.653068   23402 retry.go:31] will retry after 2.762441656s: waiting for machine to come up
	I1202 11:46:09.418859   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:09.419194   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find current IP address of domain ha-604935 in network mk-ha-604935
	I1202 11:46:09.419220   23379 main.go:141] libmachine: (ha-604935) DBG | I1202 11:46:09.419149   23402 retry.go:31] will retry after 3.896839408s: waiting for machine to come up
	I1202 11:46:13.318223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318677   23379 main.go:141] libmachine: (ha-604935) Found IP for machine: 192.168.39.102
	I1202 11:46:13.318696   23379 main.go:141] libmachine: (ha-604935) Reserving static IP address...
	I1202 11:46:13.318709   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has current primary IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.318957   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "ha-604935", mac: "52:54:00:e0:fa:7c", ip: "192.168.39.102"} in network mk-ha-604935
	I1202 11:46:13.386650   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:13.386676   23379 main.go:141] libmachine: (ha-604935) Reserved static IP address: 192.168.39.102
	I1202 11:46:13.386705   23379 main.go:141] libmachine: (ha-604935) Waiting for SSH to be available...
	I1202 11:46:13.389178   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:13.389540   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935
	I1202 11:46:13.389567   23379 main.go:141] libmachine: (ha-604935) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:e0:fa:7c
	I1202 11:46:13.389737   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:13.389771   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:13.389833   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:13.389853   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:13.389865   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:13.393280   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:46:13.393302   23379 main.go:141] libmachine: (ha-604935) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:46:13.393311   23379 main.go:141] libmachine: (ha-604935) DBG | command : exit 0
	I1202 11:46:13.393319   23379 main.go:141] libmachine: (ha-604935) DBG | err     : exit status 255
	I1202 11:46:13.393329   23379 main.go:141] libmachine: (ha-604935) DBG | output  : 
	I1202 11:46:16.395489   23379 main.go:141] libmachine: (ha-604935) DBG | Getting to WaitForSSH function...
	I1202 11:46:16.397696   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398004   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.398035   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.398057   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH client type: external
	I1202 11:46:16.398092   23379 main.go:141] libmachine: (ha-604935) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa (-rw-------)
	I1202 11:46:16.398150   23379 main.go:141] libmachine: (ha-604935) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:46:16.398173   23379 main.go:141] libmachine: (ha-604935) DBG | About to run SSH command:
	I1202 11:46:16.398186   23379 main.go:141] libmachine: (ha-604935) DBG | exit 0
	I1202 11:46:16.524025   23379 main.go:141] libmachine: (ha-604935) DBG | SSH cmd err, output: <nil>: 
	I1202 11:46:16.524319   23379 main.go:141] libmachine: (ha-604935) KVM machine creation complete!
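	The "will retry after ..." lines and the two WaitForSSH attempts above form a poll-with-backoff loop: the driver repeatedly asks libvirt for a DHCP lease, then probes SSH until "exit 0" succeeds. A simplified sketch of that pattern, using the IP and port from the log but checking only TCP reachability instead of running a command over SSH; the timeout and backoff constants are assumptions, not minikube's values:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort polls addr until a TCP connection succeeds or the deadline passes,
	// roughly doubling the delay between attempts like the retries in the log.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			fmt.Printf("not reachable yet, will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return fmt.Errorf("%s not reachable within %v", addr, timeout)
	}

	func main() {
		// IP and port taken from the log above; the overall timeout is an assumption.
		if err := waitForPort("192.168.39.102:22", 3*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("SSH port is reachable")
	}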
	I1202 11:46:16.524585   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:16.525132   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525296   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:16.525429   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:46:16.525444   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:16.526494   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:46:16.526509   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:46:16.526516   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:46:16.526523   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.528453   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.528856   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.528879   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.529035   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.529215   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.529537   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.529694   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.529924   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.529940   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:46:16.639198   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.639221   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:46:16.639229   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.641755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642065   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.642082   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.642197   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.642389   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642587   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.642718   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.642866   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.643032   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.643046   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:46:16.748649   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:46:16.748721   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:46:16.748732   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:46:16.748738   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.748943   23379 buildroot.go:166] provisioning hostname "ha-604935"
	I1202 11:46:16.748965   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.749139   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.751455   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751828   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.751862   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.751971   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.752141   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752285   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.752419   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.752578   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.752754   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.752769   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935 && echo "ha-604935" | sudo tee /etc/hostname
	I1202 11:46:16.869057   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:46:16.869084   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.871187   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871464   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.871482   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.871651   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:16.871810   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.871940   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:16.872049   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:16.872201   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:16.872396   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:16.872412   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:46:16.984630   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:46:16.984655   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:46:16.984684   23379 buildroot.go:174] setting up certificates
	I1202 11:46:16.984696   23379 provision.go:84] configureAuth start
	I1202 11:46:16.984709   23379 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:46:16.984946   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:16.987426   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987732   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.987755   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.987901   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:16.989843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990098   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:16.990122   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:16.990257   23379 provision.go:143] copyHostCerts
	I1202 11:46:16.990285   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990325   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:46:16.990334   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:46:16.990403   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:46:16.990485   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990508   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:46:16.990522   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:46:16.990547   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:46:16.990600   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990616   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:46:16.990622   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:46:16.990641   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:46:16.990697   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935 san=[127.0.0.1 192.168.39.102 ha-604935 localhost minikube]
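	The provisioning step above generates a TLS server certificate whose SANs are the VM IP, its hostname, localhost and minikube. A condensed sketch of issuing such a certificate with Go's standard library; unlike minikube, which signs it with the profile's ca.pem/ca-key.pem, this version is self-signed for brevity, and the 26280h validity reuses the CertExpiration value from the cluster config above:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs and organization copied from the provision.go line above.
		sanIPs := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")}
		sanDNS := []string{"ha-604935", "localhost", "minikube"}

		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  sanIPs,
			DNSNames:     sanDNS,
		}
		// Self-signed for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}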
	I1202 11:46:17.091711   23379 provision.go:177] copyRemoteCerts
	I1202 11:46:17.091762   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:46:17.091783   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.093867   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094147   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.094176   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.094310   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.094467   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.094595   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.094701   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.178212   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:46:17.178264   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:46:17.201820   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:46:17.201876   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:46:17.224492   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:46:17.224550   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1202 11:46:17.246969   23379 provision.go:87] duration metric: took 262.263543ms to configureAuth
	I1202 11:46:17.246987   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:46:17.247165   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:17.247239   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.249583   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.249877   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.249899   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.250032   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.250183   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250315   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.250423   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.250529   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.250670   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.250686   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:46:17.469650   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:46:17.469676   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:46:17.469685   23379 main.go:141] libmachine: (ha-604935) Calling .GetURL
	I1202 11:46:17.470859   23379 main.go:141] libmachine: (ha-604935) DBG | Using libvirt version 6000000
	I1202 11:46:17.472792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473049   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.473078   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.473161   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:46:17.473172   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:46:17.473179   23379 client.go:171] duration metric: took 25.91325953s to LocalClient.Create
	I1202 11:46:17.473201   23379 start.go:167] duration metric: took 25.913314916s to libmachine.API.Create "ha-604935"
	I1202 11:46:17.473214   23379 start.go:293] postStartSetup for "ha-604935" (driver="kvm2")
	I1202 11:46:17.473228   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:46:17.473243   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.473431   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:46:17.473460   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.475686   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.475977   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.476003   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.476117   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.476292   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.476424   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.476570   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.558504   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:46:17.562731   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:46:17.562753   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:46:17.562801   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:46:17.562870   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:46:17.562886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:46:17.562973   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:46:17.572589   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:17.596338   23379 start.go:296] duration metric: took 123.108175ms for postStartSetup
	I1202 11:46:17.596385   23379 main.go:141] libmachine: (ha-604935) Calling .GetConfigRaw
	I1202 11:46:17.596933   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.599535   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.599863   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.599888   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.600036   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:17.600197   23379 start.go:128] duration metric: took 26.057419293s to createHost
	I1202 11:46:17.600216   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.602393   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602679   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.602700   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.602888   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.603033   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603150   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.603243   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.603351   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:46:17.603548   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:46:17.603565   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:46:17.708694   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139977.687468447
	
	I1202 11:46:17.708715   23379 fix.go:216] guest clock: 1733139977.687468447
	I1202 11:46:17.708724   23379 fix.go:229] Guest: 2024-12-02 11:46:17.687468447 +0000 UTC Remote: 2024-12-02 11:46:17.600208028 +0000 UTC m=+26.158965969 (delta=87.260419ms)
	I1202 11:46:17.708747   23379 fix.go:200] guest clock delta is within tolerance: 87.260419ms
	I1202 11:46:17.708757   23379 start.go:83] releasing machines lock for "ha-604935", held for 26.166055586s
	I1202 11:46:17.708779   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.708992   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:17.711541   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711821   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.711843   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.711972   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712458   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712646   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:17.712736   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:46:17.712776   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.712829   23379 ssh_runner.go:195] Run: cat /version.json
	I1202 11:46:17.712853   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:17.715060   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715759   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.715798   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.715960   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716014   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716187   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716313   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:17.716339   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:17.716347   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716430   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:17.716502   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.716582   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:17.716706   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:17.716827   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:17.792614   23379 ssh_runner.go:195] Run: systemctl --version
	I1202 11:46:17.813470   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:46:17.973535   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:46:17.979920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:46:17.979975   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:46:17.995437   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:46:17.995459   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:46:17.995503   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:46:18.012152   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:46:18.026749   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:46:18.026813   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:46:18.040895   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:46:18.054867   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:46:18.182673   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:46:18.307537   23379 docker.go:233] disabling docker service ...
	I1202 11:46:18.307608   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:46:18.321854   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:46:18.334016   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:46:18.463785   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:46:18.581750   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:46:18.594915   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:46:18.612956   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:46:18.613013   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.623443   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:46:18.623494   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.633789   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.643912   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.654023   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:46:18.664581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.674994   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.691561   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:46:18.701797   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:46:18.711042   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:46:18.711090   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:46:18.724638   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:46:18.733743   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:18.862034   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
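For anyone replaying this CRI-O setup by hand, the configuration steps logged above reduce to roughly the following shell sequence (a sketch assembled only from the commands already shown; the config path and pause-image tag are the ones used in this run and may differ across minikube versions):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and select the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # kernel prerequisites for pod networking
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio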
	I1202 11:46:18.949557   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:46:18.949630   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:46:18.954402   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:46:18.954482   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:46:18.958128   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:46:18.997454   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:46:18.997519   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.025104   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:46:19.055599   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:46:19.056875   23379 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:46:19.059223   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059530   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:19.059555   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:19.059704   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:46:19.063855   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:19.078703   23379 kubeadm.go:883] updating cluster {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:46:19.078793   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:19.078828   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:19.116305   23379 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 11:46:19.116376   23379 ssh_runner.go:195] Run: which lz4
	I1202 11:46:19.120271   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1202 11:46:19.120778   23379 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 11:46:19.126218   23379 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 11:46:19.126239   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 11:46:20.425373   23379 crio.go:462] duration metric: took 1.305048201s to copy over tarball
	I1202 11:46:20.425452   23379 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 11:46:22.441192   23379 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01571139s)
	I1202 11:46:22.441225   23379 crio.go:469] duration metric: took 2.015821089s to extract the tarball
	I1202 11:46:22.441233   23379 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 11:46:22.478991   23379 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:46:22.530052   23379 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:46:22.530074   23379 cache_images.go:84] Images are preloaded, skipping loading
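To spot-check that the preload tarball extracted above really populated the CRI-O image store, the same crictl call the test uses can be filtered on the node (a sketch; the image name is the one the log checks for):

    sudo crictl images | grep kube-apiserver    # expect registry.k8s.io/kube-apiserver tagged v1.31.2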
	I1202 11:46:22.530083   23379 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1202 11:46:22.530186   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
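The kubelet unit override shown above is copied onto the node a few steps later (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); once in place, the effective unit and its drop-ins can be reviewed on the VM with (a sketch):

    systemctl cat kubelet
    systemctl status kubelet --no-pager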
	I1202 11:46:22.530263   23379 ssh_runner.go:195] Run: crio config
	I1202 11:46:22.572985   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:22.573005   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:22.573014   23379 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:46:22.573034   23379 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-604935 NodeName:ha-604935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:46:22.573152   23379 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-604935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
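For a quick offline sanity check of a generated config like the one above, kubeadm can validate the file directly (a sketch, run on the node after the config has been copied to /var/tmp/minikube/kubeadm.yaml as shown further down; assumes the bundled v1.31.2 kubeadm binary):

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml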
	I1202 11:46:22.573183   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:46:22.573233   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:46:22.589221   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:46:22.589338   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
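This static pod is what provides the cluster's HA endpoint: kube-vip advertises the VIP 192.168.39.254 over ARP on eth0 and, because lb_enable/lb_port are set, load-balances API traffic on port 8443 across control-plane members. A quick way to confirm it once the node is up (a sketch using only values from the manifest above):

    ip addr show dev eth0 | grep 192.168.39.254
    curl -k https://192.168.39.254:8443/healthz    # should report "ok" once the API server is healthy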
	I1202 11:46:22.589405   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:46:22.599190   23379 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:46:22.599242   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:46:22.608607   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1202 11:46:22.624652   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:46:22.640379   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1202 11:46:22.655900   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1202 11:46:22.671590   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:46:22.675287   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:46:22.687449   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:46:22.815343   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:46:22.830770   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.102
	I1202 11:46:22.830783   23379 certs.go:194] generating shared ca certs ...
	I1202 11:46:22.830798   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.830938   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:46:22.830989   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:46:22.831001   23379 certs.go:256] generating profile certs ...
	I1202 11:46:22.831074   23379 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:46:22.831100   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt with IP's: []
	I1202 11:46:22.963911   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt ...
	I1202 11:46:22.963935   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt: {Name:mk5750a5db627315b9b01ec40b88a97f880b8d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964093   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key ...
	I1202 11:46:22.964105   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key: {Name:mk12b4799c6c082b6ae6dcb6d50922caccda6be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:22.964176   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd
	I1202 11:46:22.964216   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.254]
	I1202 11:46:23.245751   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd ...
	I1202 11:46:23.245777   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd: {Name:mkd02d0517ee36862fb48fa866d0eddc37aac5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.245919   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd ...
	I1202 11:46:23.245934   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd: {Name:mkafae41baf5ffd85374c686e8a6a230d6cd62ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.246014   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:46:23.246102   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.ff277fbd -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:46:23.246163   23379 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:46:23.246178   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt with IP's: []
	I1202 11:46:23.398901   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt ...
	I1202 11:46:23.398937   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt: {Name:mk59ab7004f92d658850310a3f6a84461f824e18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399105   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key ...
	I1202 11:46:23.399117   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key: {Name:mk4341731ba8ea8693d50dafd7cfc413608c74fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:23.399195   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:46:23.399214   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:46:23.399232   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:46:23.399248   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:46:23.399263   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:46:23.399278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:46:23.399293   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:46:23.399307   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
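The apiserver certificate generated here is signed for 10.96.0.1 (the in-cluster kubernetes service IP), 127.0.0.1, 10.0.0.1, the node IP 192.168.39.102 and the kube-vip VIP 192.168.39.254, which is what lets clients reach the API through any of those addresses. The SAN list can be confirmed from the workstation copy (a sketch):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt \
      | grep -A1 'Subject Alternative Name'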
	I1202 11:46:23.399357   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:46:23.399393   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:46:23.399404   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:46:23.399426   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:46:23.399453   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:46:23.399485   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:46:23.399528   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:46:23.399560   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.399576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.399590   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.400135   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:46:23.425287   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:46:23.447899   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:46:23.470786   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:46:23.493867   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 11:46:23.517308   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:46:23.540273   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:46:23.562862   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:46:23.587751   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:46:23.615307   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:46:23.645819   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:46:23.670226   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:46:23.686120   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:46:23.691724   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:46:23.702611   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.706991   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.707032   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:46:23.712771   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:46:23.723671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:46:23.734402   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738713   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.738746   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:46:23.744060   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:46:23.754804   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:46:23.765363   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769594   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.769630   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:46:23.774953   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
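The 51391683.0, 3ec20f2e.0 and b5213941.0 symlink names used above are the OpenSSL subject hashes of the corresponding certificates, i.e. the value printed by the openssl x509 -hash calls in the log; the same scheme can be reproduced for any certificate with (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"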
	I1202 11:46:23.785412   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:46:23.789341   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:46:23.789402   23379 kubeadm.go:392] StartCluster: {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:46:23.789461   23379 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:46:23.789507   23379 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:46:23.829185   23379 cri.go:89] found id: ""
	I1202 11:46:23.829258   23379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:46:23.839482   23379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:46:23.849018   23379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:46:23.858723   23379 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:46:23.858741   23379 kubeadm.go:157] found existing configuration files:
	
	I1202 11:46:23.858784   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:46:23.867813   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:46:23.867858   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:46:23.877083   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:46:23.886137   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:46:23.886182   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:46:23.895526   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.904513   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:46:23.904574   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:46:23.913938   23379 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:46:23.922913   23379 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:46:23.922950   23379 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 11:46:23.932249   23379 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 11:46:24.043553   23379 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:46:24.043623   23379 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:46:24.150207   23379 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:46:24.150352   23379 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:46:24.150497   23379 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:46:24.159626   23379 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:46:24.195667   23379 out.go:235]   - Generating certificates and keys ...
	I1202 11:46:24.195776   23379 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:46:24.195834   23379 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:46:24.358436   23379 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:46:24.683719   23379 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:46:24.943667   23379 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:46:25.032560   23379 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:46:25.140726   23379 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:46:25.140883   23379 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.414720   23379 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:46:25.414972   23379 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-604935 localhost] and IPs [192.168.39.102 127.0.0.1 ::1]
	I1202 11:46:25.596308   23379 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:46:25.682848   23379 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:46:25.908682   23379 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:46:25.908968   23379 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:46:26.057865   23379 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:46:26.190529   23379 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:46:26.320151   23379 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:46:26.522118   23379 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:46:26.687579   23379 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:46:26.688353   23379 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:46:26.693709   23379 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:46:26.695397   23379 out.go:235]   - Booting up control plane ...
	I1202 11:46:26.695494   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:46:26.695563   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:46:26.696118   23379 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:46:26.712309   23379 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:46:26.721469   23379 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:46:26.721525   23379 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:46:26.849672   23379 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:46:26.849831   23379 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:46:27.850918   23379 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001821143s
	I1202 11:46:27.850997   23379 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:46:33.482873   23379 kubeadm.go:310] [api-check] The API server is healthy after 5.633037057s
	I1202 11:46:33.492749   23379 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:46:33.512336   23379 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:46:34.037238   23379 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:46:34.037452   23379 kubeadm.go:310] [mark-control-plane] Marking the node ha-604935 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:46:34.050856   23379 kubeadm.go:310] [bootstrap-token] Using token: 8kw29b.di3rsap6xz9ot94t
	I1202 11:46:34.052035   23379 out.go:235]   - Configuring RBAC rules ...
	I1202 11:46:34.052182   23379 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:46:34.058440   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:46:34.073861   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:46:34.076499   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:46:34.079628   23379 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:46:34.084760   23379 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:46:34.097556   23379 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:46:34.326607   23379 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:46:34.887901   23379 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:46:34.889036   23379 kubeadm.go:310] 
	I1202 11:46:34.889140   23379 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:46:34.889169   23379 kubeadm.go:310] 
	I1202 11:46:34.889273   23379 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:46:34.889281   23379 kubeadm.go:310] 
	I1202 11:46:34.889308   23379 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:46:34.889389   23379 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:46:34.889465   23379 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:46:34.889475   23379 kubeadm.go:310] 
	I1202 11:46:34.889554   23379 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:46:34.889564   23379 kubeadm.go:310] 
	I1202 11:46:34.889639   23379 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:46:34.889649   23379 kubeadm.go:310] 
	I1202 11:46:34.889720   23379 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:46:34.889845   23379 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:46:34.889909   23379 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:46:34.889916   23379 kubeadm.go:310] 
	I1202 11:46:34.889990   23379 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:46:34.890073   23379 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:46:34.890084   23379 kubeadm.go:310] 
	I1202 11:46:34.890170   23379 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890282   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 11:46:34.890321   23379 kubeadm.go:310] 	--control-plane 
	I1202 11:46:34.890328   23379 kubeadm.go:310] 
	I1202 11:46:34.890409   23379 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:46:34.890416   23379 kubeadm.go:310] 
	I1202 11:46:34.890483   23379 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8kw29b.di3rsap6xz9ot94t \
	I1202 11:46:34.890568   23379 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 11:46:34.891577   23379 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
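The bootstrap token in the join commands above is created with ttl: 24h0m0s (see the InitConfiguration earlier), so it expires a day after init; on a longer-lived control plane a fresh worker join command can be printed at any time with (a sketch, run on a control-plane node with admin credentials):

    kubeadm token create --print-join-command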
	I1202 11:46:34.891597   23379 cni.go:84] Creating CNI manager for ""
	I1202 11:46:34.891603   23379 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1202 11:46:34.892960   23379 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 11:46:34.893988   23379 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 11:46:34.899231   23379 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 11:46:34.899255   23379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 11:46:34.917969   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
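Once the manifest is applied, the kindnet CNI should come up as a DaemonSet in kube-system; using the same kubeconfig the test drives, that can be checked with (a sketch; the kindnet pod naming is the usual upstream convention rather than something shown in this log):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide | grep kindnet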
	I1202 11:46:35.272118   23379 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:46:35.272198   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.272259   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935 minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=true
	I1202 11:46:35.310028   23379 ops.go:34] apiserver oom_adj: -16
	I1202 11:46:35.408095   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:35.908268   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.408944   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:36.909158   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.408454   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:37.909038   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.408700   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:38.908314   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:46:39.023834   23379 kubeadm.go:1113] duration metric: took 3.751689624s to wait for elevateKubeSystemPrivileges
	I1202 11:46:39.023871   23379 kubeadm.go:394] duration metric: took 15.234471878s to StartCluster
	I1202 11:46:39.023890   23379 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.023968   23379 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.024843   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:46:39.025096   23379 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.025129   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:46:39.025139   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:46:39.025146   23379 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 11:46:39.025247   23379 addons.go:69] Setting storage-provisioner=true in profile "ha-604935"
	I1202 11:46:39.025268   23379 addons.go:234] Setting addon storage-provisioner=true in "ha-604935"
	I1202 11:46:39.025297   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.025365   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.025267   23379 addons.go:69] Setting default-storageclass=true in profile "ha-604935"
	I1202 11:46:39.025420   23379 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-604935"
	I1202 11:46:39.025726   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025773   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.025867   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.025904   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.040510   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I1202 11:46:39.040567   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1202 11:46:39.041007   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041111   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.041500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041642   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.041669   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.041855   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042005   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.042156   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.042501   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.042547   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.044200   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:46:39.044508   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 11:46:39.044954   23379 cert_rotation.go:140] Starting client certificate rotation controller
	I1202 11:46:39.045176   23379 addons.go:234] Setting addon default-storageclass=true in "ha-604935"
	I1202 11:46:39.045212   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:46:39.045509   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.045548   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.056740   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I1202 11:46:39.057180   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.057736   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.057761   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.058043   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.058254   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.059103   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I1202 11:46:39.059506   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.059989   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.060003   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.060030   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.060305   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.060780   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.060821   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.061507   23379 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:46:39.062672   23379 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.062687   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:46:39.062700   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.065792   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066230   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.066257   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.066378   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.066549   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.066694   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.066850   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.076289   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I1202 11:46:39.076690   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.077099   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.077122   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.077418   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.077579   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:46:39.079081   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:46:39.079273   23379 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.079287   23379 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:46:39.079300   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:46:39.082143   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082579   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:46:39.082597   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:46:39.082752   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:46:39.082910   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:46:39.083074   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:46:39.083219   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:46:39.138927   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:46:39.202502   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:46:39.264780   23379 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:46:39.722155   23379 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
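The sed pipeline a few lines above rewrites the coredns ConfigMap so that a hosts block precedes the forward directive, which is what lets pods resolve host.minikube.internal to the host gateway 192.168.39.1. Reconstructed from that sed expression (not captured verbatim in this log), the resulting Corefile fragment is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

An illustrative way to confirm the injected record, mirroring the in-VM kubectl invocation used earlier and not executed in this run:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml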
	I1202 11:46:39.944980   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945000   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945116   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945141   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945269   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945284   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945292   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945298   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945459   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945489   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945500   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.945513   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.945457   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945578   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.945581   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945620   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945796   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.945844   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.945933   23379 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 11:46:39.945977   23379 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 11:46:39.945813   23379 main.go:141] libmachine: (ha-604935) DBG | Closing plugin on server side
	I1202 11:46:39.946087   23379 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1202 11:46:39.946099   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.946109   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.946117   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.963939   23379 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1202 11:46:39.964651   23379 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1202 11:46:39.964667   23379 round_trippers.go:469] Request Headers:
	I1202 11:46:39.964677   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:46:39.964684   23379 round_trippers.go:473]     Content-Type: application/json
	I1202 11:46:39.964689   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:46:39.968484   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:46:39.968627   23379 main.go:141] libmachine: Making call to close driver server
	I1202 11:46:39.968639   23379 main.go:141] libmachine: (ha-604935) Calling .Close
	I1202 11:46:39.968886   23379 main.go:141] libmachine: Successfully made call to close driver server
	I1202 11:46:39.968902   23379 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 11:46:39.970238   23379 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1202 11:46:39.971383   23379 addons.go:510] duration metric: took 946.244666ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 11:46:39.971420   23379 start.go:246] waiting for cluster config update ...
	I1202 11:46:39.971435   23379 start.go:255] writing updated cluster config ...
	I1202 11:46:39.972900   23379 out.go:201] 
	I1202 11:46:39.974083   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:46:39.974147   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.975564   23379 out.go:177] * Starting "ha-604935-m02" control-plane node in "ha-604935" cluster
	I1202 11:46:39.976682   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:46:39.976701   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:46:39.976788   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:46:39.976800   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:46:39.976872   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:46:39.977100   23379 start.go:360] acquireMachinesLock for ha-604935-m02: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:46:39.977152   23379 start.go:364] duration metric: took 22.26µs to acquireMachinesLock for "ha-604935-m02"
	I1202 11:46:39.977175   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:46:39.977250   23379 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1202 11:46:39.978689   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:46:39.978765   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:46:39.978800   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:46:39.993356   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I1202 11:46:39.993775   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:46:39.994235   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:46:39.994266   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:46:39.994666   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:46:39.994881   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:46:39.995033   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:46:39.995225   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:46:39.995256   23379 client.go:168] LocalClient.Create starting
	I1202 11:46:39.995293   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:46:39.995339   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995364   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995433   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:46:39.995460   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:46:39.995482   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:46:39.995508   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:46:39.995520   23379 main.go:141] libmachine: (ha-604935-m02) Calling .PreCreateCheck
	I1202 11:46:39.995688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:46:39.996035   23379 main.go:141] libmachine: Creating machine...
	I1202 11:46:39.996049   23379 main.go:141] libmachine: (ha-604935-m02) Calling .Create
	I1202 11:46:39.996158   23379 main.go:141] libmachine: (ha-604935-m02) Creating KVM machine...
	I1202 11:46:39.997515   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing default KVM network
	I1202 11:46:39.997667   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found existing private KVM network mk-ha-604935
	I1202 11:46:39.997862   23379 main.go:141] libmachine: (ha-604935-m02) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:39.997894   23379 main.go:141] libmachine: (ha-604935-m02) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:46:39.997973   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:39.997863   23734 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:39.998066   23379 main.go:141] libmachine: (ha-604935-m02) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:46:40.246601   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.246459   23734 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa...
	I1202 11:46:40.345704   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345606   23734 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk...
	I1202 11:46:40.345732   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing magic tar header
	I1202 11:46:40.345746   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Writing SSH key tar header
	I1202 11:46:40.345760   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:40.345732   23734 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 ...
	I1202 11:46:40.345873   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02
	I1202 11:46:40.345899   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:46:40.345912   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02 (perms=drwx------)
	I1202 11:46:40.345936   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:46:40.345967   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:46:40.345981   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:46:40.345991   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:46:40.346001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:46:40.346014   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Checking permissions on dir: /home
	I1202 11:46:40.346025   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Skipping /home - not owner
	I1202 11:46:40.346072   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:46:40.346108   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:46:40.346124   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:46:40.346137   23379 main.go:141] libmachine: (ha-604935-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:46:40.346162   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:40.346895   23379 main.go:141] libmachine: (ha-604935-m02) define libvirt domain using xml: 
	I1202 11:46:40.346916   23379 main.go:141] libmachine: (ha-604935-m02) <domain type='kvm'>
	I1202 11:46:40.346942   23379 main.go:141] libmachine: (ha-604935-m02)   <name>ha-604935-m02</name>
	I1202 11:46:40.346957   23379 main.go:141] libmachine: (ha-604935-m02)   <memory unit='MiB'>2200</memory>
	I1202 11:46:40.346974   23379 main.go:141] libmachine: (ha-604935-m02)   <vcpu>2</vcpu>
	I1202 11:46:40.346979   23379 main.go:141] libmachine: (ha-604935-m02)   <features>
	I1202 11:46:40.346986   23379 main.go:141] libmachine: (ha-604935-m02)     <acpi/>
	I1202 11:46:40.346990   23379 main.go:141] libmachine: (ha-604935-m02)     <apic/>
	I1202 11:46:40.346995   23379 main.go:141] libmachine: (ha-604935-m02)     <pae/>
	I1202 11:46:40.347001   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347008   23379 main.go:141] libmachine: (ha-604935-m02)   </features>
	I1202 11:46:40.347027   23379 main.go:141] libmachine: (ha-604935-m02)   <cpu mode='host-passthrough'>
	I1202 11:46:40.347034   23379 main.go:141] libmachine: (ha-604935-m02)   
	I1202 11:46:40.347038   23379 main.go:141] libmachine: (ha-604935-m02)   </cpu>
	I1202 11:46:40.347043   23379 main.go:141] libmachine: (ha-604935-m02)   <os>
	I1202 11:46:40.347049   23379 main.go:141] libmachine: (ha-604935-m02)     <type>hvm</type>
	I1202 11:46:40.347054   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='cdrom'/>
	I1202 11:46:40.347060   23379 main.go:141] libmachine: (ha-604935-m02)     <boot dev='hd'/>
	I1202 11:46:40.347066   23379 main.go:141] libmachine: (ha-604935-m02)     <bootmenu enable='no'/>
	I1202 11:46:40.347072   23379 main.go:141] libmachine: (ha-604935-m02)   </os>
	I1202 11:46:40.347077   23379 main.go:141] libmachine: (ha-604935-m02)   <devices>
	I1202 11:46:40.347082   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='cdrom'>
	I1202 11:46:40.347089   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/boot2docker.iso'/>
	I1202 11:46:40.347096   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hdc' bus='scsi'/>
	I1202 11:46:40.347101   23379 main.go:141] libmachine: (ha-604935-m02)       <readonly/>
	I1202 11:46:40.347105   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347111   23379 main.go:141] libmachine: (ha-604935-m02)     <disk type='file' device='disk'>
	I1202 11:46:40.347118   23379 main.go:141] libmachine: (ha-604935-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:46:40.347128   23379 main.go:141] libmachine: (ha-604935-m02)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/ha-604935-m02.rawdisk'/>
	I1202 11:46:40.347135   23379 main.go:141] libmachine: (ha-604935-m02)       <target dev='hda' bus='virtio'/>
	I1202 11:46:40.347140   23379 main.go:141] libmachine: (ha-604935-m02)     </disk>
	I1202 11:46:40.347144   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347152   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='mk-ha-604935'/>
	I1202 11:46:40.347156   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347162   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347167   23379 main.go:141] libmachine: (ha-604935-m02)     <interface type='network'>
	I1202 11:46:40.347172   23379 main.go:141] libmachine: (ha-604935-m02)       <source network='default'/>
	I1202 11:46:40.347178   23379 main.go:141] libmachine: (ha-604935-m02)       <model type='virtio'/>
	I1202 11:46:40.347183   23379 main.go:141] libmachine: (ha-604935-m02)     </interface>
	I1202 11:46:40.347187   23379 main.go:141] libmachine: (ha-604935-m02)     <serial type='pty'>
	I1202 11:46:40.347194   23379 main.go:141] libmachine: (ha-604935-m02)       <target port='0'/>
	I1202 11:46:40.347204   23379 main.go:141] libmachine: (ha-604935-m02)     </serial>
	I1202 11:46:40.347211   23379 main.go:141] libmachine: (ha-604935-m02)     <console type='pty'>
	I1202 11:46:40.347221   23379 main.go:141] libmachine: (ha-604935-m02)       <target type='serial' port='0'/>
	I1202 11:46:40.347236   23379 main.go:141] libmachine: (ha-604935-m02)     </console>
	I1202 11:46:40.347247   23379 main.go:141] libmachine: (ha-604935-m02)     <rng model='virtio'>
	I1202 11:46:40.347255   23379 main.go:141] libmachine: (ha-604935-m02)       <backend model='random'>/dev/random</backend>
	I1202 11:46:40.347264   23379 main.go:141] libmachine: (ha-604935-m02)     </rng>
	I1202 11:46:40.347271   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347282   23379 main.go:141] libmachine: (ha-604935-m02)     
	I1202 11:46:40.347295   23379 main.go:141] libmachine: (ha-604935-m02)   </devices>
	I1202 11:46:40.347306   23379 main.go:141] libmachine: (ha-604935-m02) </domain>
	I1202 11:46:40.347319   23379 main.go:141] libmachine: (ha-604935-m02) 
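The XML above is the complete libvirt definition for the ha-604935-m02 domain: boot ISO plus raw disk, two virtio NICs (private network mk-ha-604935 and the default NAT network), a serial console, and a virtio RNG. Illustrative virsh commands for inspecting the result, not part of the test run (the connection URI comes from the profile config logged above):

    # Dump the stored domain definition.
    virsh --connect qemu:///system dumpxml ha-604935-m02
    # List the DHCP-assigned addresses; the log eventually settles on 192.168.39.96.
    virsh --connect qemu:///system domifaddr ha-604935-m02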
	I1202 11:46:40.353726   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:2b:bd:2e in network default
	I1202 11:46:40.354276   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring networks are active...
	I1202 11:46:40.354296   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:40.355011   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network default is active
	I1202 11:46:40.355333   23379 main.go:141] libmachine: (ha-604935-m02) Ensuring network mk-ha-604935 is active
	I1202 11:46:40.355771   23379 main.go:141] libmachine: (ha-604935-m02) Getting domain xml...
	I1202 11:46:40.356531   23379 main.go:141] libmachine: (ha-604935-m02) Creating domain...
	I1202 11:46:41.552192   23379 main.go:141] libmachine: (ha-604935-m02) Waiting to get IP...
	I1202 11:46:41.552923   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.553342   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.553365   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.553311   23734 retry.go:31] will retry after 250.26239ms: waiting for machine to come up
	I1202 11:46:41.804774   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:41.805224   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:41.805252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:41.805182   23734 retry.go:31] will retry after 337.906383ms: waiting for machine to come up
	I1202 11:46:42.144697   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.145141   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.145174   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.145097   23734 retry.go:31] will retry after 345.416251ms: waiting for machine to come up
	I1202 11:46:42.491650   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:42.492205   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:42.492269   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:42.492187   23734 retry.go:31] will retry after 576.231118ms: waiting for machine to come up
	I1202 11:46:43.069832   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.070232   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.070258   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.070185   23734 retry.go:31] will retry after 484.637024ms: waiting for machine to come up
	I1202 11:46:43.557338   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:43.557918   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:43.557945   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:43.557876   23734 retry.go:31] will retry after 878.448741ms: waiting for machine to come up
	I1202 11:46:44.437501   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:44.437938   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:44.437963   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:44.437910   23734 retry.go:31] will retry after 1.136235758s: waiting for machine to come up
	I1202 11:46:45.575985   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:45.576450   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:45.576493   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:45.576415   23734 retry.go:31] will retry after 1.136366132s: waiting for machine to come up
	I1202 11:46:46.714826   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:46.715252   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:46.715280   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:46.715201   23734 retry.go:31] will retry after 1.737559308s: waiting for machine to come up
	I1202 11:46:48.455006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:48.455487   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:48.455517   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:48.455436   23734 retry.go:31] will retry after 1.586005802s: waiting for machine to come up
	I1202 11:46:50.042947   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:50.043522   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:50.043548   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:50.043471   23734 retry.go:31] will retry after 1.94342421s: waiting for machine to come up
	I1202 11:46:51.988099   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:51.988615   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:51.988639   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:51.988575   23734 retry.go:31] will retry after 3.527601684s: waiting for machine to come up
	I1202 11:46:55.517564   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:55.518092   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:55.518121   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:55.518041   23734 retry.go:31] will retry after 3.578241105s: waiting for machine to come up
	I1202 11:46:59.097310   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:46:59.097631   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find current IP address of domain ha-604935-m02 in network mk-ha-604935
	I1202 11:46:59.097651   23379 main.go:141] libmachine: (ha-604935-m02) DBG | I1202 11:46:59.097596   23734 retry.go:31] will retry after 5.085934719s: waiting for machine to come up
	I1202 11:47:04.187907   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.188401   23379 main.go:141] libmachine: (ha-604935-m02) Found IP for machine: 192.168.39.96
	I1202 11:47:04.188429   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has current primary IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.188437   23379 main.go:141] libmachine: (ha-604935-m02) Reserving static IP address...
	I1202 11:47:04.188743   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "ha-604935-m02", mac: "52:54:00:42:3a:28", ip: "192.168.39.96"} in network mk-ha-604935
	I1202 11:47:04.256531   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:04.256562   23379 main.go:141] libmachine: (ha-604935-m02) Reserved static IP address: 192.168.39.96
	I1202 11:47:04.256575   23379 main.go:141] libmachine: (ha-604935-m02) Waiting for SSH to be available...
	I1202 11:47:04.258823   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:04.259113   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935
	I1202 11:47:04.259157   23379 main.go:141] libmachine: (ha-604935-m02) DBG | unable to find defined IP address of network mk-ha-604935 interface with MAC address 52:54:00:42:3a:28
	I1202 11:47:04.259288   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:04.259308   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:04.259373   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:04.259397   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:04.259411   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:04.263986   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: exit status 255: 
	I1202 11:47:04.264009   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 11:47:04.264016   23379 main.go:141] libmachine: (ha-604935-m02) DBG | command : exit 0
	I1202 11:47:04.264041   23379 main.go:141] libmachine: (ha-604935-m02) DBG | err     : exit status 255
	I1202 11:47:04.264051   23379 main.go:141] libmachine: (ha-604935-m02) DBG | output  : 
	I1202 11:47:07.264654   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Getting to WaitForSSH function...
	I1202 11:47:07.266849   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267221   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.267249   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.267406   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH client type: external
	I1202 11:47:07.267434   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa (-rw-------)
	I1202 11:47:07.267472   23379 main.go:141] libmachine: (ha-604935-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:47:07.267495   23379 main.go:141] libmachine: (ha-604935-m02) DBG | About to run SSH command:
	I1202 11:47:07.267507   23379 main.go:141] libmachine: (ha-604935-m02) DBG | exit 0
	I1202 11:47:07.391931   23379 main.go:141] libmachine: (ha-604935-m02) DBG | SSH cmd err, output: <nil>: 
	I1202 11:47:07.392120   23379 main.go:141] libmachine: (ha-604935-m02) KVM machine creation complete!
	I1202 11:47:07.392498   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:07.393039   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393215   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:07.393337   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:47:07.393354   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetState
	I1202 11:47:07.394565   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:47:07.394578   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:47:07.394584   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:47:07.394589   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.396709   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397006   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.397033   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.397522   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.398890   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399081   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.399216   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.399356   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.399544   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.399555   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:47:07.503380   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.503409   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:47:07.503420   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.506083   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506469   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.506502   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.506641   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.506811   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.506958   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.507087   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.507236   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.507398   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.507407   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:47:07.612741   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:47:07.612843   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:47:07.612858   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:47:07.612872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613105   23379 buildroot.go:166] provisioning hostname "ha-604935-m02"
	I1202 11:47:07.613126   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.613280   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.615682   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616001   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.616029   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.616193   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.616355   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616496   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.616615   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.616752   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.616925   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.616942   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m02 && echo "ha-604935-m02" | sudo tee /etc/hostname
	I1202 11:47:07.739596   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m02
	
	I1202 11:47:07.739622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.742125   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742500   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.742532   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.742709   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:07.742872   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743043   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:07.743173   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:07.743334   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:07.743539   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:07.743561   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:47:07.857236   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:47:07.857259   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:47:07.857284   23379 buildroot.go:174] setting up certificates
	I1202 11:47:07.857292   23379 provision.go:84] configureAuth start
	I1202 11:47:07.857300   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetMachineName
	I1202 11:47:07.857527   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:07.860095   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860513   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.860543   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.860692   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:07.862585   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.862958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:07.862988   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:07.863114   23379 provision.go:143] copyHostCerts
	I1202 11:47:07.863150   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863186   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:47:07.863197   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:47:07.863272   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:47:07.863374   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863401   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:47:07.863412   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:47:07.863452   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:47:07.863528   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863553   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:47:07.863563   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:47:07.863595   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:47:07.863674   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m02 san=[127.0.0.1 192.168.39.96 ha-604935-m02 localhost minikube]
	I1202 11:47:08.103724   23379 provision.go:177] copyRemoteCerts
	I1202 11:47:08.103779   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:47:08.103802   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.106490   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.106829   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.106859   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.107025   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.107200   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.107328   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.107425   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.190303   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:47:08.190378   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:47:08.217749   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:47:08.217812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:47:08.240576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:47:08.240626   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:47:08.263351   23379 provision.go:87] duration metric: took 406.049409ms to configureAuth
	I1202 11:47:08.263374   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:47:08.263549   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:08.263627   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.266183   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266506   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.266542   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.266657   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.266822   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.266953   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.267045   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.267212   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.267440   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.267458   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:47:08.480702   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
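Note on the step above: minikube hands cri-o its extra runtime flags by writing CRIO_MINIKUBE_OPTIONS into a sysconfig drop-in and restarting the service. A minimal Go sketch of building that same shell pipeline; buildCrioSysconfigCmd is an illustrative helper name, not minikube's API.

package main

import "fmt"

// buildCrioSysconfigCmd assembles the shell pipeline shown in the log: write
// CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube, then restart cri-o.
// Sketch only; the real command is run remotely via ssh_runner.
func buildCrioSysconfigCmd(serviceCIDR string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
}

func main() {
	fmt.Println(buildCrioSysconfigCmd("10.96.0.0/12"))
}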
	I1202 11:47:08.480726   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:47:08.480737   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetURL
	I1202 11:47:08.481946   23379 main.go:141] libmachine: (ha-604935-m02) DBG | Using libvirt version 6000000
	I1202 11:47:08.484074   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484465   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.484486   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.484652   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:47:08.484665   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:47:08.484672   23379 client.go:171] duration metric: took 28.489409707s to LocalClient.Create
	I1202 11:47:08.484691   23379 start.go:167] duration metric: took 28.489467042s to libmachine.API.Create "ha-604935"
	I1202 11:47:08.484701   23379 start.go:293] postStartSetup for "ha-604935-m02" (driver="kvm2")
	I1202 11:47:08.484710   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:47:08.484726   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.484947   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:47:08.484979   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.487275   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487627   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.487652   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.487763   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.487916   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.488023   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.488157   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.570418   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:47:08.574644   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:47:08.574668   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:47:08.574734   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:47:08.574834   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:47:08.574847   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:47:08.574955   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:47:08.584296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:08.607137   23379 start.go:296] duration metric: took 122.426316ms for postStartSetup
	I1202 11:47:08.607176   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetConfigRaw
	I1202 11:47:08.607688   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.609787   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610122   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.610140   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.610348   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:08.610507   23379 start.go:128] duration metric: took 28.633177558s to createHost
	I1202 11:47:08.610528   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.612576   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.612933   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.612958   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.613094   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.613256   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613387   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.613495   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.613675   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:47:08.613819   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.96 22 <nil> <nil>}
	I1202 11:47:08.613829   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:47:08.721072   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140028.701362667
	
	I1202 11:47:08.721095   23379 fix.go:216] guest clock: 1733140028.701362667
	I1202 11:47:08.721104   23379 fix.go:229] Guest: 2024-12-02 11:47:08.701362667 +0000 UTC Remote: 2024-12-02 11:47:08.610518479 +0000 UTC m=+77.169276420 (delta=90.844188ms)
	I1202 11:47:08.721123   23379 fix.go:200] guest clock delta is within tolerance: 90.844188ms
	I1202 11:47:08.721129   23379 start.go:83] releasing machines lock for "ha-604935-m02", held for 28.743964366s
	I1202 11:47:08.721146   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.721362   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:08.723610   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.723892   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.723917   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.725920   23379 out.go:177] * Found network options:
	I1202 11:47:08.727151   23379 out.go:177]   - NO_PROXY=192.168.39.102
	W1202 11:47:08.728253   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.728295   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728718   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728888   23379 main.go:141] libmachine: (ha-604935-m02) Calling .DriverName
	I1202 11:47:08.728964   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:47:08.729018   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	W1202 11:47:08.729077   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:47:08.729140   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:47:08.729159   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHHostname
	I1202 11:47:08.731377   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731690   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731736   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.731757   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.731905   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732089   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732138   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:08.732161   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:08.732263   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732335   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHPort
	I1202 11:47:08.732412   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.732482   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHKeyPath
	I1202 11:47:08.732622   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetSSHUsername
	I1202 11:47:08.732772   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m02/id_rsa Username:docker}
	I1202 11:47:08.961089   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:47:08.967388   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:47:08.967456   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:47:08.983898   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:47:08.983919   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:47:08.983976   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:47:08.999755   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:47:09.012969   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:47:09.013013   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:47:09.025774   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:47:09.038595   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:47:09.155525   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:47:09.315590   23379 docker.go:233] disabling docker service ...
	I1202 11:47:09.315645   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:47:09.329428   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:47:09.341852   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:47:09.455987   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:47:09.568119   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:47:09.581349   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:47:09.599069   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:47:09.599131   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.609102   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:47:09.609172   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.619619   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.629809   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.640881   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:47:09.650894   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.660662   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.676866   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:47:09.687794   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:47:09.696987   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:47:09.697035   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:47:09.709512   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:47:09.718617   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:09.833443   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
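For reference, a small Go listing that restates, in order, the remote commands above that prepare cri-o (pause image, cgroupfs cgroup driver, conmon cgroup, kernel prerequisites) before the restart; it is a readable summary of the log, not minikube source.

package main

import "fmt"

// crioSetupCmds restates the remote commands shown above, in the order they ran.
var crioSetupCmds = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	"sudo modprobe br_netfilter",
	`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
	"sudo systemctl daemon-reload",
	"sudo systemctl restart crio",
}

func main() {
	for i, c := range crioSetupCmds {
		fmt.Printf("%2d. %s\n", i+1, c)
	}
}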
	I1202 11:47:09.924039   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:47:09.924108   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:47:09.929102   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:47:09.929151   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:47:09.932909   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:47:09.970799   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:47:09.970857   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:09.997925   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:47:10.026009   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:47:10.027185   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:47:10.028209   23379 main.go:141] libmachine: (ha-604935-m02) Calling .GetIP
	I1202 11:47:10.030558   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.030843   23379 main.go:141] libmachine: (ha-604935-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:3a:28", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:55 +0000 UTC Type:0 Mac:52:54:00:42:3a:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-604935-m02 Clientid:01:52:54:00:42:3a:28}
	I1202 11:47:10.030865   23379 main.go:141] libmachine: (ha-604935-m02) DBG | domain ha-604935-m02 has defined IP address 192.168.39.96 and MAC address 52:54:00:42:3a:28 in network mk-ha-604935
	I1202 11:47:10.031081   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:47:10.034913   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:10.046993   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:47:10.047168   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:10.047464   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.047509   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.061535   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I1202 11:47:10.061962   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.062500   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.062519   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.062832   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.062993   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:47:10.064396   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:10.064646   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:10.064674   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:10.078237   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I1202 11:47:10.078536   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:10.078918   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:10.078933   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:10.079205   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:10.079368   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:10.079517   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.96
	I1202 11:47:10.079528   23379 certs.go:194] generating shared ca certs ...
	I1202 11:47:10.079548   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.079686   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:47:10.079733   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:47:10.079746   23379 certs.go:256] generating profile certs ...
	I1202 11:47:10.079838   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:47:10.079869   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3
	I1202 11:47:10.079889   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.254]
	I1202 11:47:10.265166   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 ...
	I1202 11:47:10.265189   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3: {Name:mkdd0b8b1421fc39bdc7a4c81c195bce0584f3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265365   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 ...
	I1202 11:47:10.265383   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3: {Name:mk317f3cb02e9fefc92b2802c6865b7da9a08a71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:47:10.265473   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:47:10.265636   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.be8c86f3 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:47:10.265813   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:47:10.265832   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:47:10.265850   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:47:10.265871   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:47:10.265888   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:47:10.265904   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:47:10.265920   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:47:10.265936   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:47:10.265955   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:47:10.266021   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:47:10.266059   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:47:10.266073   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:47:10.266106   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:47:10.266137   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:47:10.266166   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:47:10.266222   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:47:10.266260   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.266282   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.266301   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.266341   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:10.268885   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269241   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:10.269271   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:10.269395   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:10.269566   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:10.269669   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:10.269777   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:10.344538   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:47:10.349538   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:47:10.360402   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:47:10.364479   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:47:10.374445   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:47:10.378811   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:47:10.389170   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:47:10.392986   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:47:10.403485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:47:10.408617   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:47:10.418394   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:47:10.422245   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:47:10.432316   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:47:10.458960   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:47:10.483156   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:47:10.505724   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:47:10.527955   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 11:47:10.550812   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:47:10.573508   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:47:10.595760   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:47:10.618337   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:47:10.641184   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:47:10.663681   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:47:10.687678   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:47:10.703651   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:47:10.719297   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:47:10.734755   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:47:10.751060   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:47:10.767295   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:47:10.783201   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:47:10.798776   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:47:10.804781   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:47:10.814853   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819107   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.819150   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:47:10.824680   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:47:10.834444   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:47:10.847333   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852096   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.852141   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:47:10.857456   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:47:10.867671   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:47:10.878797   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883014   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.883050   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:47:10.888463   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
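Note on the commands above: each CA dropped under /usr/share/ca-certificates is made visible to OpenSSL's trust lookup by symlinking it as <subject-hash>.0 in /etc/ssl/certs, which is what the openssl x509 -hash calls compute. A minimal sketch of that pattern; linkCACert is an illustrative helper, not minikube code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// symlinks it as /etc/ssl/certs/<hash>.0, mirroring the commands in the log.
func linkCACert(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("linked:", link)
}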
	I1202 11:47:10.900014   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:47:10.903987   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:47:10.904033   23379 kubeadm.go:934] updating node {m02 192.168.39.96 8443 v1.31.2 crio true true} ...
	I1202 11:47:10.904108   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
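Note on the kubelet drop-in above: the ExecStart line is rebuilt per node with a version-pinned binary path, a hostname override, and the node's IP from the log. A sketch of assembling it; kubeletExecStart is an illustrative name, not minikube's own helper.

package main

import "fmt"

// kubeletExecStart builds the ExecStart line shown above for a joining node.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet "+
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
		"--config=/var/lib/kubelet/config.yaml "+
		"--hostname-override=%s "+
		"--kubeconfig=/etc/kubernetes/kubelet.conf "+
		"--node-ip=%s",
		version, nodeName, nodeIP)
}

func main() {
	// Values taken from the log for ha-604935-m02.
	fmt.Println(kubeletExecStart("v1.31.2", "ha-604935-m02", "192.168.39.96"))
}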
	I1202 11:47:10.904143   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:47:10.904172   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:47:10.920663   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:47:10.920727   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
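Note on the manifest above: the kube-vip static pod provides the control-plane VIP (192.168.39.254:8443) that the join step below targets, with cp_enable/lb_enable turning on leader election and control-plane load balancing. A sketch of parameterising just the VIP-specific env entries with text/template; the template and variable names are illustrative, not minikube's own.

package main

import (
	"fmt"
	"os"
	"text/template"
)

// vipEnvTmpl renders the address/port env entries of a kube-vip manifest.
var vipEnvTmpl = template.Must(template.New("vip").Parse(`    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
`))

func main() {
	// Values taken from the log: the HA VIP and the API server port.
	err := vipEnvTmpl.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}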
	I1202 11:47:10.920782   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.929813   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:47:10.929869   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:47:10.938939   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:47:10.938963   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939004   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1202 11:47:10.939023   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:47:10.939098   23379 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1202 11:47:10.943516   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:47:10.943543   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:47:11.580278   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.580378   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:47:11.585380   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:47:11.585410   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:47:11.699996   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:11.746001   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.746098   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:47:11.755160   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:47:11.755193   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1202 11:47:12.167193   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:47:12.177362   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 11:47:12.193477   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:47:12.209277   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:47:12.225224   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:47:12.229096   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:47:12.241465   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:12.355965   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:12.372721   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:47:12.373199   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:12.373246   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:12.387521   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1202 11:47:12.387950   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:12.388471   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:12.388495   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:12.388817   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:12.389008   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:47:12.389136   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:47:12.389250   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:47:12.389272   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:47:12.391559   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.391918   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:47:12.391947   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:47:12.392078   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:47:12.392244   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:47:12.392404   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:47:12.392523   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:47:12.542455   23379 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:12.542510   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443"
	I1202 11:47:33.298276   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 781q3h.dri7zuf7dlr9vool --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m02 --control-plane --apiserver-advertise-address=192.168.39.96 --apiserver-bind-port=8443": (20.75572497s)
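Note on the join above: it is a stock kubeadm invocation built from a freshly created non-expiring token, the discovery CA hash, and this node's advertise address. A sketch that reassembles the command string from those pieces; the helper name and the placeholder token/hash are illustrative.

package main

import "fmt"

// joinControlPlaneCmd rebuilds the kubeadm join command the log runs on m02.
func joinControlPlaneCmd(version, endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	return fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm join %s `+
		`--token %s --discovery-token-ca-cert-hash %s --ignore-preflight-errors=all `+
		`--cri-socket unix:///var/run/crio/crio.sock --node-name=%s `+
		`--control-plane --apiserver-advertise-address=%s --apiserver-bind-port=%d`,
		version, endpoint, token, caHash, nodeName, advertiseIP, port)
}

func main() {
	// Token and hash are placeholders; the real values come from
	// "kubeadm token create --print-join-command" as in the log.
	fmt.Println(joinControlPlaneCmd("v1.31.2", "control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>", "ha-604935-m02", "192.168.39.96", 8443))
}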
	I1202 11:47:33.298324   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:47:33.868140   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m02 minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:47:34.014505   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:47:34.151913   23379 start.go:319] duration metric: took 21.762775302s to joinCluster
	I1202 11:47:34.151988   23379 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:34.152289   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:34.153405   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:47:34.154583   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:47:34.458218   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:47:34.537753   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:47:34.537985   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:47:34.538049   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
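The verifier builds its client from the kubeconfig on disk and then swaps the stale load-balancer VIP (192.168.39.254) for a concrete, reachable control-plane endpoint before polling, as the warning above shows. A rough sketch of that pattern with client-go (the kubeconfig path is the one from this job; any valid path works):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a rest.Config from the kubeconfig written by minikube.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
        if err != nil {
            panic(err)
        }

        // Override the stale VIP host with a control-plane node that is known
        // to be reachable, mirroring the "Overriding stale ClientConfig host"
        // warning in the log above.
        cfg.Host = "https://192.168.39.102:8443"

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("cluster has %d nodes\n", len(nodes.Items))
    }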
	I1202 11:47:34.538237   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m02" to be "Ready" ...
	I1202 11:47:34.538328   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:34.538338   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:34.538353   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:34.538361   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:34.553164   23379 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1202 11:47:35.038636   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.038655   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.038663   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.038667   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.043410   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:35.539240   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:35.539268   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:35.539288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:35.539295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:35.543768   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:36.038477   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.038500   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.038510   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.038514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.044852   23379 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1202 11:47:36.539264   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:36.539282   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:36.539291   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:36.539294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:36.541884   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:36.542608   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:37.039323   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.039344   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.039355   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.039363   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.042762   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:37.539267   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:37.539288   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:37.539298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:37.539302   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:37.542085   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:38.039187   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.039205   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.039213   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.039217   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.042510   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:38.538564   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:38.538590   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:38.538602   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:38.538607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:38.543229   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:38.543842   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:39.039431   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.039454   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.039465   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.039470   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.043101   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:39.538521   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:39.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:39.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:39.538565   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:39.544151   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:47:40.039125   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.039142   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.039150   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.039155   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.041928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:40.539447   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:40.539466   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:40.539477   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:40.539482   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:40.542088   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.039165   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.039194   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.039206   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.039214   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.042019   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:41.042646   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:41.538430   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:41.538449   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:41.538456   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:41.538460   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:41.541300   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:42.038543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.038564   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.038574   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.038579   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.042807   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:42.539123   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:42.539144   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:42.539155   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:42.539168   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:42.615775   23379 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1202 11:47:43.038628   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.038651   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.038660   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.038670   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.041582   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:43.538519   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:43.538548   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:43.538559   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:43.538566   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:43.542876   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:43.543448   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:44.038473   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.038493   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.038501   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.038506   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.041916   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:44.538909   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:44.538934   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:44.538946   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:44.538954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:44.542475   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.039019   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.039039   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.039046   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.039050   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.042662   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.539381   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:45.539404   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:45.539414   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:45.539419   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:45.543229   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:45.544177   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:46.038600   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.038622   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.038630   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.038635   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.041460   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:46.538597   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:46.538618   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:46.538628   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:46.538632   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:46.541444   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:47.038797   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.038817   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.038825   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.038828   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.041962   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:47.539440   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:47.539463   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:47.539470   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:47.539474   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:47.543115   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.039282   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.039306   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.039316   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.039320   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.042491   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:48.043162   23379 node_ready.go:53] node "ha-604935-m02" has status "Ready":"False"
	I1202 11:47:48.539348   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:48.539372   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:48.539382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:48.539387   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:48.542583   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.038466   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.038485   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.038493   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.038498   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.041480   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.539130   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.539151   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.539162   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.539166   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.542870   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:49.543570   23379 node_ready.go:49] node "ha-604935-m02" has status "Ready":"True"
	I1202 11:47:49.543589   23379 node_ready.go:38] duration metric: took 15.005336835s for node "ha-604935-m02" to be "Ready" ...
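The half-second GET loop above is a readiness poll: fetch the Node object, check its Ready condition, sleep, repeat until the 6m budget runs out. A simplified version of that loop, assuming `cs` is a *kubernetes.Clientset built as in the previous sketch:

    package verify

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the Node object until its Ready condition is
    // True or the timeout expires, roughly what the log's node_ready wait does
    // with a ~500ms interval and a 6m budget.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q never became Ready within %v", name, timeout)
    }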
	I1202 11:47:49.543598   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:47:49.543686   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:49.543695   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.543702   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.543707   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.548022   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.557050   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.557145   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:47:49.557159   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.557169   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.557181   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.561541   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:49.562194   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.562212   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.562222   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.562229   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.564378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.564821   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.564836   23379 pod_ready.go:82] duration metric: took 7.7579ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564845   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.564897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:47:49.564905   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.564912   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.564919   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.566980   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.567489   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.567501   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.567509   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.567514   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.569545   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.570321   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.570337   23379 pod_ready.go:82] duration metric: took 5.482367ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570346   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.570395   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:47:49.570402   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.570408   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.570416   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.572224   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.572830   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:49.572845   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.572852   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.572856   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.574847   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:47:49.575387   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:49.575407   23379 pod_ready.go:82] duration metric: took 5.05521ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575417   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:49.575471   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:49.575482   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.575492   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.575497   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.577559   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:49.578025   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:49.578036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:49.578042   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:49.578046   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:49.580244   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.075930   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.075955   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.075967   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.075972   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.078932   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:50.079644   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.079660   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.079671   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.079679   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.083049   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.576373   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:50.576396   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.576404   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.576408   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.579581   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:50.580413   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:50.580428   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:50.580435   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:50.580439   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:50.582674   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.075671   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:47:51.075692   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.075700   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.075705   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.080547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:51.081109   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.081140   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.081151   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.081159   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.083775   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.084570   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.084587   23379 pod_ready.go:82] duration metric: took 1.509162413s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084605   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.084654   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:47:51.084661   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.084668   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.084676   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.086997   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.139895   23379 request.go:632] Waited for 52.198749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139936   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.139941   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.139948   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.139954   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.142459   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.143143   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.143164   23379 pod_ready.go:82] duration metric: took 58.549955ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
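The "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter (by default around 5 requests/sec with a burst of 10), not from server-side API priority and fairness; a burst of small readiness GETs like the ones above is enough to trip it. If those delays mattered, the limiter could be relaxed on the rest.Config before building the clientset. An illustrative sketch (the values are arbitrary, not what minikube uses):

    package verify

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newFastClient raises the client-side rate limit so tight polling loops
    // are not paced by the default limiter, the source of the
    // "Waited for ... due to client-side throttling" messages.
    func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }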
	I1202 11:47:51.143176   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.339592   23379 request.go:632] Waited for 196.342057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:47:51.339648   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.339657   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.339665   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.342939   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.539862   23379 request.go:632] Waited for 196.164588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:51.539935   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.539943   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.539950   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.543209   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:51.543865   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.543882   23379 pod_ready.go:82] duration metric: took 400.698772ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.543892   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.739144   23379 request.go:632] Waited for 195.19473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739219   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:47:51.739235   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.739245   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.739249   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.741900   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.940184   23379 request.go:632] Waited for 197.361013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940269   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:51.940278   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:51.940285   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:51.940289   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:51.943128   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:51.943706   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:51.943727   23379 pod_ready.go:82] duration metric: took 399.828238ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:51.943741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.139832   23379 request.go:632] Waited for 196.024828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139897   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:47:52.139908   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.139915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.139922   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.143273   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.339296   23379 request.go:632] Waited for 195.254025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:52.339382   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.339392   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.339396   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.343086   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:52.343632   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.343651   23379 pod_ready.go:82] duration metric: took 399.901549ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.343664   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.540119   23379 request.go:632] Waited for 196.382954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:47:52.540223   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.540246   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.540254   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.544789   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.739964   23379 request.go:632] Waited for 194.383281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740029   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:52.740036   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.740047   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.740056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.744675   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:52.745274   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:52.745291   23379 pod_ready.go:82] duration metric: took 401.620034ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.745302   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:52.939398   23379 request.go:632] Waited for 194.014981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939448   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:47:52.939453   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:52.939460   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:52.939466   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:52.942473   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:47:53.139562   23379 request.go:632] Waited for 196.368019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139626   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.139631   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.139639   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.139642   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.142786   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.143361   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.143382   23379 pod_ready.go:82] duration metric: took 398.068666ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.143391   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.339501   23379 request.go:632] Waited for 196.04496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339586   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:47:53.339596   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.339607   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.339618   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.343080   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.540159   23379 request.go:632] Waited for 196.184742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540226   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:47:53.540246   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.540255   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.540261   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.543534   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.544454   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.544479   23379 pod_ready.go:82] duration metric: took 401.077052ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.544494   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.739453   23379 request.go:632] Waited for 194.878612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739540   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:47:53.739557   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.739572   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.739583   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.743318   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:53.939180   23379 request.go:632] Waited for 195.280753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939245   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:47:53.939250   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.939258   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.939265   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.943381   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:53.944067   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:47:53.944085   23379 pod_ready.go:82] duration metric: took 399.577551ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:47:53.944099   23379 pod_ready.go:39] duration metric: took 4.40047197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
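Each of the pod checks above follows the same recipe: fetch the pod (or list pods by a label selector such as component=etcd or k8s-app=kube-proxy) in kube-system and require condition Ready=True. A condensed sketch of that check, again assuming `cs` is a *kubernetes.Clientset:

    package verify

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // systemPodsReady reports whether every kube-system pod matching the given
    // label selectors (e.g. "component=etcd", "k8s-app=kube-proxy") has the
    // Ready condition set to True.
    func systemPodsReady(ctx context.Context, cs *kubernetes.Clientset, selectors []string) (bool, error) {
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                ready := false
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                    }
                }
                if !ready {
                    return false, nil
                }
            }
        }
        return true, nil
    }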
	I1202 11:47:53.944119   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:47:53.944173   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:47:53.960762   23379 api_server.go:72] duration metric: took 19.808744771s to wait for apiserver process to appear ...
	I1202 11:47:53.960781   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:47:53.960802   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:47:53.965634   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:47:53.965695   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:47:53.965706   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:53.965717   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:53.965727   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:53.966539   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:47:53.966644   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:47:53.966664   23379 api_server.go:131] duration metric: took 5.87665ms to wait for apiserver health ...
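After confirming a kube-apiserver process exists on the node, the verifier hits /healthz (expecting the literal body "ok") and then reads the server version through the same credentials. Both requests can be issued through the clientset's discovery REST client; a sketch:

    package verify

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy checks /healthz and then reads the control-plane
    // version, mirroring the two requests logged above.
    func apiserverHealthy(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", string(body))
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
        return nil
    }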
	I1202 11:47:53.966674   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:47:54.140116   23379 request.go:632] Waited for 173.370822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140184   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.140192   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.140203   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.140213   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.144688   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.150151   23379 system_pods.go:59] 17 kube-system pods found
	I1202 11:47:54.150175   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.150180   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.150184   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.150187   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.150190   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.150193   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.150196   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.150200   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.150204   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.150208   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.150213   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.150216   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.150222   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.150225   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.150228   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.150230   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.150234   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.150239   23379 system_pods.go:74] duration metric: took 183.556674ms to wait for pod list to return data ...
	I1202 11:47:54.150248   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:47:54.339686   23379 request.go:632] Waited for 189.36849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339740   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:47:54.339744   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.339751   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.339755   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.343135   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.343361   23379 default_sa.go:45] found service account: "default"
	I1202 11:47:54.343386   23379 default_sa.go:55] duration metric: took 193.131705ms for default service account to be created ...
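The default-service-account step is a plain list of ServiceAccounts in the default namespace, repeated until the kube-controller-manager has created the "default" account. A sketch of that check:

    package verify

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultServiceAccountExists returns true once the "default"
    // ServiceAccount has been created in the default namespace.
    func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }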
	I1202 11:47:54.343397   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:47:54.539835   23379 request.go:632] Waited for 196.371965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:47:54.539943   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.539954   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.539964   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.544943   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:47:54.550739   23379 system_pods.go:86] 17 kube-system pods found
	I1202 11:47:54.550763   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:47:54.550769   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:47:54.550775   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:47:54.550778   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:47:54.550809   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:47:54.550819   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:47:54.550824   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:47:54.550829   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:47:54.550833   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:47:54.550837   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:47:54.550841   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:47:54.550848   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:47:54.550852   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:47:54.550857   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:47:54.550862   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:47:54.550867   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:47:54.550870   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:47:54.550878   23379 system_pods.go:126] duration metric: took 207.476252ms to wait for k8s-apps to be running ...
	I1202 11:47:54.550887   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:47:54.550927   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:47:54.567143   23379 system_svc.go:56] duration metric: took 16.250371ms WaitForService to wait for kubelet
	I1202 11:47:54.567163   23379 kubeadm.go:582] duration metric: took 20.415147049s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
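The kubelet check shells out to systemd on the node over SSH; stripped of the SSH transport, it reduces to a single exec call, sketched below (systemctl exits 0 when the unit is active, non-zero otherwise):

    package verify

    import "os/exec"

    // kubeletActive mirrors the `systemctl is-active --quiet` probe run on the
    // node above, reporting whether the kubelet unit is currently active.
    func kubeletActive() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }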
	I1202 11:47:54.567180   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:47:54.739589   23379 request.go:632] Waited for 172.338353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739668   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:47:54.739675   23379 round_trippers.go:469] Request Headers:
	I1202 11:47:54.739683   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:47:54.739688   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:47:54.743346   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:47:54.744125   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744152   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744165   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:47:54.744170   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:47:54.744177   23379 node_conditions.go:105] duration metric: took 176.990456ms to run NodePressure ...
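The NodePressure step lists every node and records its ephemeral-storage and CPU capacity (both control-plane nodes report 17734596Ki and 2 CPUs here) before the m02 verification is declared complete. A sketch of that listing with client-go:

    package verify

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacities lists all nodes and prints the capacities that the
    // NodePressure verification logs for each of them.
    func printNodeCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }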
	I1202 11:47:54.744190   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:47:54.744223   23379 start.go:255] writing updated cluster config ...
	I1202 11:47:54.746253   23379 out.go:201] 
	I1202 11:47:54.747593   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:47:54.747718   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.749358   23379 out.go:177] * Starting "ha-604935-m03" control-plane node in "ha-604935" cluster
	I1202 11:47:54.750410   23379 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:47:54.750433   23379 cache.go:56] Caching tarball of preloaded images
	I1202 11:47:54.750533   23379 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:47:54.750548   23379 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:47:54.750643   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:47:54.750878   23379 start.go:360] acquireMachinesLock for ha-604935-m03: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:47:54.750923   23379 start.go:364] duration metric: took 26.206µs to acquireMachinesLock for "ha-604935-m03"
	I1202 11:47:54.750944   23379 start.go:93] Provisioning new machine with config: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:47:54.751067   23379 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1202 11:47:54.752864   23379 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 11:47:54.752946   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:47:54.752986   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:47:54.767584   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I1202 11:47:54.767916   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:47:54.768481   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:47:54.768505   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:47:54.768819   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:47:54.768991   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:47:54.769125   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:47:54.769335   23379 start.go:159] libmachine.API.Create for "ha-604935" (driver="kvm2")
	I1202 11:47:54.769376   23379 client.go:168] LocalClient.Create starting
	I1202 11:47:54.769409   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 11:47:54.769445   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769469   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769535   23379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 11:47:54.769563   23379 main.go:141] libmachine: Decoding PEM data...
	I1202 11:47:54.769581   23379 main.go:141] libmachine: Parsing certificate...
	I1202 11:47:54.769610   23379 main.go:141] libmachine: Running pre-create checks...
	I1202 11:47:54.769622   23379 main.go:141] libmachine: (ha-604935-m03) Calling .PreCreateCheck
	I1202 11:47:54.769820   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:47:54.770184   23379 main.go:141] libmachine: Creating machine...
	I1202 11:47:54.770198   23379 main.go:141] libmachine: (ha-604935-m03) Calling .Create
	I1202 11:47:54.770317   23379 main.go:141] libmachine: (ha-604935-m03) Creating KVM machine...
	I1202 11:47:54.771476   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing default KVM network
	I1202 11:47:54.771588   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found existing private KVM network mk-ha-604935
	I1202 11:47:54.771715   23379 main.go:141] libmachine: (ha-604935-m03) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:54.771731   23379 main.go:141] libmachine: (ha-604935-m03) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:47:54.771824   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:54.771717   24139 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:54.771925   23379 main.go:141] libmachine: (ha-604935-m03) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 11:47:55.025734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.025618   24139 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa...
	I1202 11:47:55.125359   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125265   24139 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk...
	I1202 11:47:55.125386   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing magic tar header
	I1202 11:47:55.125397   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Writing SSH key tar header
	I1202 11:47:55.125407   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:55.125384   24139 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 ...
	I1202 11:47:55.125541   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03
	I1202 11:47:55.125572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 11:47:55.125586   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03 (perms=drwx------)
	I1202 11:47:55.125605   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 11:47:55.125622   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 11:47:55.125634   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 11:47:55.125649   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 11:47:55.125663   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:47:55.125683   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 11:47:55.125697   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 11:47:55.125710   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home/jenkins
	I1202 11:47:55.125719   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Checking permissions on dir: /home
	I1202 11:47:55.125733   23379 main.go:141] libmachine: (ha-604935-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 11:47:55.125745   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Skipping /home - not owner
	I1202 11:47:55.125754   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:55.126629   23379 main.go:141] libmachine: (ha-604935-m03) define libvirt domain using xml: 
	I1202 11:47:55.126649   23379 main.go:141] libmachine: (ha-604935-m03) <domain type='kvm'>
	I1202 11:47:55.126659   23379 main.go:141] libmachine: (ha-604935-m03)   <name>ha-604935-m03</name>
	I1202 11:47:55.126667   23379 main.go:141] libmachine: (ha-604935-m03)   <memory unit='MiB'>2200</memory>
	I1202 11:47:55.126675   23379 main.go:141] libmachine: (ha-604935-m03)   <vcpu>2</vcpu>
	I1202 11:47:55.126685   23379 main.go:141] libmachine: (ha-604935-m03)   <features>
	I1202 11:47:55.126693   23379 main.go:141] libmachine: (ha-604935-m03)     <acpi/>
	I1202 11:47:55.126701   23379 main.go:141] libmachine: (ha-604935-m03)     <apic/>
	I1202 11:47:55.126706   23379 main.go:141] libmachine: (ha-604935-m03)     <pae/>
	I1202 11:47:55.126709   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.126714   23379 main.go:141] libmachine: (ha-604935-m03)   </features>
	I1202 11:47:55.126721   23379 main.go:141] libmachine: (ha-604935-m03)   <cpu mode='host-passthrough'>
	I1202 11:47:55.126745   23379 main.go:141] libmachine: (ha-604935-m03)   
	I1202 11:47:55.126763   23379 main.go:141] libmachine: (ha-604935-m03)   </cpu>
	I1202 11:47:55.126773   23379 main.go:141] libmachine: (ha-604935-m03)   <os>
	I1202 11:47:55.126780   23379 main.go:141] libmachine: (ha-604935-m03)     <type>hvm</type>
	I1202 11:47:55.126791   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='cdrom'/>
	I1202 11:47:55.126796   23379 main.go:141] libmachine: (ha-604935-m03)     <boot dev='hd'/>
	I1202 11:47:55.126808   23379 main.go:141] libmachine: (ha-604935-m03)     <bootmenu enable='no'/>
	I1202 11:47:55.126817   23379 main.go:141] libmachine: (ha-604935-m03)   </os>
	I1202 11:47:55.126827   23379 main.go:141] libmachine: (ha-604935-m03)   <devices>
	I1202 11:47:55.126837   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='cdrom'>
	I1202 11:47:55.126849   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/boot2docker.iso'/>
	I1202 11:47:55.126860   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hdc' bus='scsi'/>
	I1202 11:47:55.126869   23379 main.go:141] libmachine: (ha-604935-m03)       <readonly/>
	I1202 11:47:55.126878   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126888   23379 main.go:141] libmachine: (ha-604935-m03)     <disk type='file' device='disk'>
	I1202 11:47:55.126904   23379 main.go:141] libmachine: (ha-604935-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 11:47:55.126929   23379 main.go:141] libmachine: (ha-604935-m03)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/ha-604935-m03.rawdisk'/>
	I1202 11:47:55.126949   23379 main.go:141] libmachine: (ha-604935-m03)       <target dev='hda' bus='virtio'/>
	I1202 11:47:55.126958   23379 main.go:141] libmachine: (ha-604935-m03)     </disk>
	I1202 11:47:55.126972   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.126984   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='mk-ha-604935'/>
	I1202 11:47:55.126990   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127001   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127011   23379 main.go:141] libmachine: (ha-604935-m03)     <interface type='network'>
	I1202 11:47:55.127022   23379 main.go:141] libmachine: (ha-604935-m03)       <source network='default'/>
	I1202 11:47:55.127039   23379 main.go:141] libmachine: (ha-604935-m03)       <model type='virtio'/>
	I1202 11:47:55.127046   23379 main.go:141] libmachine: (ha-604935-m03)     </interface>
	I1202 11:47:55.127054   23379 main.go:141] libmachine: (ha-604935-m03)     <serial type='pty'>
	I1202 11:47:55.127059   23379 main.go:141] libmachine: (ha-604935-m03)       <target port='0'/>
	I1202 11:47:55.127065   23379 main.go:141] libmachine: (ha-604935-m03)     </serial>
	I1202 11:47:55.127070   23379 main.go:141] libmachine: (ha-604935-m03)     <console type='pty'>
	I1202 11:47:55.127080   23379 main.go:141] libmachine: (ha-604935-m03)       <target type='serial' port='0'/>
	I1202 11:47:55.127089   23379 main.go:141] libmachine: (ha-604935-m03)     </console>
	I1202 11:47:55.127100   23379 main.go:141] libmachine: (ha-604935-m03)     <rng model='virtio'>
	I1202 11:47:55.127112   23379 main.go:141] libmachine: (ha-604935-m03)       <backend model='random'>/dev/random</backend>
	I1202 11:47:55.127125   23379 main.go:141] libmachine: (ha-604935-m03)     </rng>
	I1202 11:47:55.127130   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127136   23379 main.go:141] libmachine: (ha-604935-m03)     
	I1202 11:47:55.127141   23379 main.go:141] libmachine: (ha-604935-m03)   </devices>
	I1202 11:47:55.127147   23379 main.go:141] libmachine: (ha-604935-m03) </domain>
	I1202 11:47:55.127154   23379 main.go:141] libmachine: (ha-604935-m03) 
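
[Editor's note] The XML logged above is the complete libvirt domain definition the kvm2 driver hands to libvirtd for ha-604935-m03. As a rough illustration only, defining and booting a domain from such an XML document with the libvirt.org/go/libvirt bindings looks approximately like the sketch below; this is not the driver's actual code, and the function name is made up for illustration.

    package sketch

    import (
        "fmt"

        "libvirt.org/go/libvirt"
    )

    // defineAndStart is a minimal sketch: it defines a persistent domain from
    // an XML document like the one in the log and then boots it, mirroring the
    // driver's "define libvirt domain using xml" and "Creating domain..." steps.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
        if err != nil {
            return fmt.Errorf("connect to libvirt: %w", err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the freshly defined domain
            return fmt.Errorf("start domain: %w", err)
        }
        return nil
    }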
	I1202 11:47:55.134362   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:04:31:c3 in network default
	I1202 11:47:55.134940   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring networks are active...
	I1202 11:47:55.134970   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:55.135700   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network default is active
	I1202 11:47:55.135994   23379 main.go:141] libmachine: (ha-604935-m03) Ensuring network mk-ha-604935 is active
	I1202 11:47:55.136395   23379 main.go:141] libmachine: (ha-604935-m03) Getting domain xml...
	I1202 11:47:55.137154   23379 main.go:141] libmachine: (ha-604935-m03) Creating domain...
	I1202 11:47:56.327343   23379 main.go:141] libmachine: (ha-604935-m03) Waiting to get IP...
	I1202 11:47:56.328051   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.328532   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.328560   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.328490   24139 retry.go:31] will retry after 245.534512ms: waiting for machine to come up
	I1202 11:47:56.575853   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.576344   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.576361   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.576322   24139 retry.go:31] will retry after 318.961959ms: waiting for machine to come up
	I1202 11:47:56.897058   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:56.897590   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:56.897617   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:56.897539   24139 retry.go:31] will retry after 408.54179ms: waiting for machine to come up
	I1202 11:47:57.308040   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.308434   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.308462   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.308386   24139 retry.go:31] will retry after 402.803745ms: waiting for machine to come up
	I1202 11:47:57.713046   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:57.713543   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:57.713570   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:57.713486   24139 retry.go:31] will retry after 579.226055ms: waiting for machine to come up
	I1202 11:47:58.294078   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:58.294470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:58.294499   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:58.294431   24139 retry.go:31] will retry after 896.930274ms: waiting for machine to come up
	I1202 11:47:59.192283   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:47:59.192647   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:47:59.192676   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:47:59.192594   24139 retry.go:31] will retry after 885.008169ms: waiting for machine to come up
	I1202 11:48:00.078944   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:00.079402   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:00.079429   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:00.079369   24139 retry.go:31] will retry after 1.252859053s: waiting for machine to come up
	I1202 11:48:01.333237   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:01.333651   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:01.333686   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:01.333595   24139 retry.go:31] will retry after 1.614324315s: waiting for machine to come up
	I1202 11:48:02.949128   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:02.949536   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:02.949565   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:02.949508   24139 retry.go:31] will retry after 1.812710836s: waiting for machine to come up
	I1202 11:48:04.763946   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:04.764375   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:04.764406   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:04.764323   24139 retry.go:31] will retry after 2.067204627s: waiting for machine to come up
	I1202 11:48:06.833288   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:06.833665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:06.833688   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:06.833637   24139 retry.go:31] will retry after 2.307525128s: waiting for machine to come up
	I1202 11:48:09.144169   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:09.144572   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:09.144593   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:09.144528   24139 retry.go:31] will retry after 3.498536479s: waiting for machine to come up
	I1202 11:48:12.646257   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:12.646634   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find current IP address of domain ha-604935-m03 in network mk-ha-604935
	I1202 11:48:12.646662   23379 main.go:141] libmachine: (ha-604935-m03) DBG | I1202 11:48:12.646585   24139 retry.go:31] will retry after 4.180840958s: waiting for machine to come up
	I1202 11:48:16.830266   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has current primary IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.830768   23379 main.go:141] libmachine: (ha-604935-m03) Found IP for machine: 192.168.39.211
	I1202 11:48:16.830807   23379 main.go:141] libmachine: (ha-604935-m03) Reserving static IP address...
	I1202 11:48:16.831141   23379 main.go:141] libmachine: (ha-604935-m03) DBG | unable to find host DHCP lease matching {name: "ha-604935-m03", mac: "52:54:00:56:c4:59", ip: "192.168.39.211"} in network mk-ha-604935
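
[Editor's note] The long run of "will retry after ..." lines above is the driver polling DHCP leases for the new MAC address with a growing, slightly jittered delay (245ms, 318ms, ... 4.18s) until libvirt hands out 192.168.39.211. A minimal sketch of that wait-and-retry pattern, not minikube's actual retry helper:

    package sketch

    import (
        "errors"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup() with a growing, jittered delay until it reports
    // an address or the deadline passes, roughly matching the progression of
    // retry intervals visible in the log above.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            // sleep the base delay plus up to 50% jitter
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
            if delay < 4*time.Second {
                delay += delay / 2 // back off, but cap the growth
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }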
	I1202 11:48:16.902131   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Getting to WaitForSSH function...
	I1202 11:48:16.902164   23379 main.go:141] libmachine: (ha-604935-m03) Reserved static IP address: 192.168.39.211
	I1202 11:48:16.902173   23379 main.go:141] libmachine: (ha-604935-m03) Waiting for SSH to be available...
	I1202 11:48:16.905075   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905526   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:16.905551   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:16.905741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH client type: external
	I1202 11:48:16.905772   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa (-rw-------)
	I1202 11:48:16.905800   23379 main.go:141] libmachine: (ha-604935-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 11:48:16.905820   23379 main.go:141] libmachine: (ha-604935-m03) DBG | About to run SSH command:
	I1202 11:48:16.905851   23379 main.go:141] libmachine: (ha-604935-m03) DBG | exit 0
	I1202 11:48:17.032533   23379 main.go:141] libmachine: (ha-604935-m03) DBG | SSH cmd err, output: <nil>: 
	I1202 11:48:17.032776   23379 main.go:141] libmachine: (ha-604935-m03) KVM machine creation complete!
	I1202 11:48:17.033131   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:17.033671   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.033865   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.034018   23379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 11:48:17.034033   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetState
	I1202 11:48:17.035293   23379 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 11:48:17.035305   23379 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 11:48:17.035310   23379 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 11:48:17.035315   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.037352   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037741   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.037774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.037900   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.038083   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038238   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.038381   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.038530   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.038713   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.038724   23379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 11:48:17.143327   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
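
[Editor's note] The "exit 0" probe above is how libmachine confirms SSH is usable on the new node: key-only auth as the docker user, host key checking disabled. A comparable check sketched with golang.org/x/crypto/ssh rather than libmachine's own client (the function name and error handling are illustrative only):

    package sketch

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // checkSSH dials host:22 with the machine's private key and runs "exit 0",
    // the same probe shown in the log.
    func checkSSH(host, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", fmt.Sprintf("%s:22", host), cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }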
	I1202 11:48:17.143352   23379 main.go:141] libmachine: Detecting the provisioner...
	I1202 11:48:17.143372   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.146175   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146516   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.146548   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.146646   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.146838   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.146983   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.147108   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.147258   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.147425   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.147438   23379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 11:48:17.253131   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 11:48:17.253218   23379 main.go:141] libmachine: found compatible host: buildroot
	I1202 11:48:17.253233   23379 main.go:141] libmachine: Provisioning with buildroot...
	I1202 11:48:17.253245   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253510   23379 buildroot.go:166] provisioning hostname "ha-604935-m03"
	I1202 11:48:17.253537   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.253707   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.256428   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256774   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.256796   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.256946   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.257116   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257249   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.257377   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.257504   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.257691   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.257703   23379 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935-m03 && echo "ha-604935-m03" | sudo tee /etc/hostname
	I1202 11:48:17.375185   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935-m03
	
	I1202 11:48:17.375210   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.377667   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378038   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.378062   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.378264   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.378483   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378634   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.378780   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.378929   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.379106   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.379136   23379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:48:17.496248   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:48:17.496279   23379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:48:17.496297   23379 buildroot.go:174] setting up certificates
	I1202 11:48:17.496309   23379 provision.go:84] configureAuth start
	I1202 11:48:17.496322   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetMachineName
	I1202 11:48:17.496560   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:17.499486   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.499912   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.499947   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.500094   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.502337   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502712   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.502737   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.502856   23379 provision.go:143] copyHostCerts
	I1202 11:48:17.502886   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.502931   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:48:17.502944   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:48:17.503023   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:48:17.503097   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503116   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:48:17.503123   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:48:17.503148   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:48:17.503191   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503207   23379 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:48:17.503214   23379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:48:17.503234   23379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:48:17.503299   23379 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935-m03 san=[127.0.0.1 192.168.39.211 ha-604935-m03 localhost minikube]
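
[Editor's note] configureAuth mints a fresh server certificate for the new node, signed by the shared minikube CA and carrying the SANs listed above (127.0.0.1, 192.168.39.211, ha-604935-m03, localhost, minikube). A bare-bones standard-library sketch of producing that kind of SAN-bearing server cert, not minikube's actual cert helper:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert returns DER-encoded server certificate bytes signed by the
    // given CA, with the DNS/IP SANs seen in the provisioning log above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-604935-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity chosen arbitrarily for the sketch
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-604935-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.211")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }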
	I1202 11:48:17.587852   23379 provision.go:177] copyRemoteCerts
	I1202 11:48:17.587906   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:48:17.587927   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.590598   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.590995   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.591015   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.591197   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.591367   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.591543   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.591679   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:17.674221   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:48:17.674296   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:48:17.698597   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:48:17.698660   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:48:17.723039   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:48:17.723097   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:48:17.747396   23379 provision.go:87] duration metric: took 251.076751ms to configureAuth
	I1202 11:48:17.747416   23379 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:48:17.747635   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:17.747715   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.750670   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751052   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.751081   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.751262   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.751452   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751599   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.751748   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.751905   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:17.752098   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:17.752117   23379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:48:17.976945   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:48:17.976975   23379 main.go:141] libmachine: Checking connection to Docker...
	I1202 11:48:17.976987   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetURL
	I1202 11:48:17.978227   23379 main.go:141] libmachine: (ha-604935-m03) DBG | Using libvirt version 6000000
	I1202 11:48:17.980581   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.980959   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.980987   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.981117   23379 main.go:141] libmachine: Docker is up and running!
	I1202 11:48:17.981135   23379 main.go:141] libmachine: Reticulating splines...
	I1202 11:48:17.981143   23379 client.go:171] duration metric: took 23.211756514s to LocalClient.Create
	I1202 11:48:17.981168   23379 start.go:167] duration metric: took 23.211833697s to libmachine.API.Create "ha-604935"
	I1202 11:48:17.981181   23379 start.go:293] postStartSetup for "ha-604935-m03" (driver="kvm2")
	I1202 11:48:17.981196   23379 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:48:17.981223   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:17.981429   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:48:17.981453   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:17.983470   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983816   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:17.983841   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:17.983966   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:17.984144   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:17.984312   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:17.984449   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.067334   23379 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:48:18.072037   23379 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:48:18.072060   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:48:18.072140   23379 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:48:18.072226   23379 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:48:18.072251   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:48:18.072352   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:48:18.083182   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:18.110045   23379 start.go:296] duration metric: took 128.848906ms for postStartSetup
	I1202 11:48:18.110090   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetConfigRaw
	I1202 11:48:18.110693   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.113273   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113636   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.113656   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.113891   23379 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:48:18.114175   23379 start.go:128] duration metric: took 23.363096022s to createHost
	I1202 11:48:18.114201   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.116660   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.116982   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.117010   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.117166   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.117378   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117545   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.117689   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.117845   23379 main.go:141] libmachine: Using SSH client type: native
	I1202 11:48:18.118040   23379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1202 11:48:18.118051   23379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:48:18.225174   23379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140098.198364061
	
	I1202 11:48:18.225197   23379 fix.go:216] guest clock: 1733140098.198364061
	I1202 11:48:18.225206   23379 fix.go:229] Guest: 2024-12-02 11:48:18.198364061 +0000 UTC Remote: 2024-12-02 11:48:18.114189112 +0000 UTC m=+146.672947053 (delta=84.174949ms)
	I1202 11:48:18.225226   23379 fix.go:200] guest clock delta is within tolerance: 84.174949ms
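
[Editor's note] For reference, the reported delta is simply the difference between the two wall clocks captured above: 1733140098.198364061 s (guest) − 1733140098.114189112 s (host) = 0.084174949 s ≈ 84.17 ms, which is inside the allowed skew, so no clock adjustment is attempted.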
	I1202 11:48:18.225232   23379 start.go:83] releasing machines lock for "ha-604935-m03", held for 23.474299783s
	I1202 11:48:18.225255   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.225523   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:18.228223   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.228665   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.228698   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.231057   23379 out.go:177] * Found network options:
	I1202 11:48:18.232381   23379 out.go:177]   - NO_PROXY=192.168.39.102,192.168.39.96
	W1202 11:48:18.233581   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.233602   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.233614   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234079   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234244   23379 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:48:18.234317   23379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:48:18.234369   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	W1202 11:48:18.234421   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:48:18.234435   23379 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:48:18.234477   23379 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:48:18.234492   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:48:18.237268   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237547   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237709   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.237734   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.237883   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.237989   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:18.238016   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:18.238057   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238152   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:48:18.238220   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238300   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:48:18.238378   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.238455   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:48:18.238579   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:48:18.473317   23379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:48:18.479920   23379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:48:18.479984   23379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:48:18.496983   23379 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 11:48:18.497001   23379 start.go:495] detecting cgroup driver to use...
	I1202 11:48:18.497065   23379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:48:18.513241   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:48:18.527410   23379 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:48:18.527466   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:48:18.541725   23379 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:48:18.557008   23379 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:48:18.688718   23379 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:48:18.852643   23379 docker.go:233] disabling docker service ...
	I1202 11:48:18.852707   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:48:18.868163   23379 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:48:18.881925   23379 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:48:19.017240   23379 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:48:19.151423   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:48:19.165081   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:48:19.183322   23379 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:48:19.183382   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.193996   23379 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:48:19.194053   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.204159   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.214125   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.224009   23379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:48:19.234581   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.244825   23379 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:48:19.261368   23379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
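
[Editor's note] Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, put conmon into the pod cgroup, and allow unprivileged processes in containers to bind low ports. After they run, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should plausibly read roughly as below; the section headers are assumed from CRI-O's stock config layout, not read from the VM.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]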
	I1202 11:48:19.270942   23379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:48:19.279793   23379 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:48:19.279828   23379 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:48:19.292711   23379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:48:19.302043   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:19.426581   23379 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:48:19.517813   23379 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:48:19.517869   23379 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:48:19.523046   23379 start.go:563] Will wait 60s for crictl version
	I1202 11:48:19.523100   23379 ssh_runner.go:195] Run: which crictl
	I1202 11:48:19.526693   23379 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:48:19.569077   23379 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:48:19.569154   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.606184   23379 ssh_runner.go:195] Run: crio --version
	I1202 11:48:19.639221   23379 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:48:19.640557   23379 out.go:177]   - env NO_PROXY=192.168.39.102
	I1202 11:48:19.641750   23379 out.go:177]   - env NO_PROXY=192.168.39.102,192.168.39.96
	I1202 11:48:19.642878   23379 main.go:141] libmachine: (ha-604935-m03) Calling .GetIP
	I1202 11:48:19.645504   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.645963   23379 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:48:19.645990   23379 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:48:19.646180   23379 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:48:19.650508   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:48:19.664882   23379 mustload.go:65] Loading cluster: ha-604935
	I1202 11:48:19.665139   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:19.665497   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.665538   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.680437   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1202 11:48:19.680830   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.681262   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.681286   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.681575   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.681746   23379 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:48:19.683191   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:19.683564   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:19.683606   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:19.697831   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I1202 11:48:19.698152   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:19.698542   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:19.698559   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:19.698845   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:19.699001   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:19.699166   23379 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.211
	I1202 11:48:19.699179   23379 certs.go:194] generating shared ca certs ...
	I1202 11:48:19.699197   23379 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.699318   23379 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:48:19.699355   23379 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:48:19.699364   23379 certs.go:256] generating profile certs ...
	I1202 11:48:19.699432   23379 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:48:19.699455   23379 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864
	I1202 11:48:19.699468   23379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.211 192.168.39.254]
	I1202 11:48:19.775540   23379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 ...
	I1202 11:48:19.775561   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864: {Name:mk862a073739ee2a78cf9f81a3258f4be6a2f692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775718   23379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 ...
	I1202 11:48:19.775732   23379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864: {Name:mk2b946b8deaf42e144aacb0aeac107c1e5e5346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:48:19.775826   23379 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:48:19.775947   23379 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.6315c864 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:48:19.776063   23379 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
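
The apiserver certificate generated above carries the IP SANs listed in the crypto.go line (service VIPs, localhost, the three control-plane IPs and the kube-vip VIP). A minimal Go sketch of that kind of issuance using the standard crypto/x509 package follows; it is not minikube's crypto.go, the CA here is a throwaway stand-in (the real flow reuses the cached minikubeCA key pair), and all names are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Stand-in CA: the real flow loads the existing minikubeCA key pair instead.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// API server serving cert with the IP SANs from the log above.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.96"),
    			net.ParseIP("192.168.39.211"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
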
	I1202 11:48:19.776077   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:48:19.776089   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:48:19.776102   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:48:19.776114   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:48:19.776131   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:48:19.776145   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:48:19.776157   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:48:19.800328   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:48:19.800402   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:48:19.800434   23379 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:48:19.800443   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:48:19.800467   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:48:19.800488   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:48:19.800508   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:48:19.800550   23379 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:48:19.800576   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:19.800589   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:48:19.800601   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:48:19.800629   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:19.803275   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803700   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:19.803723   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:19.803908   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:19.804099   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:19.804214   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:19.804377   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:19.880485   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:48:19.886022   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:48:19.898728   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:48:19.903305   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:48:19.914871   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:48:19.919141   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:48:19.929566   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:48:19.933478   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:48:19.943613   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:48:19.948089   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:48:19.958895   23379 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:48:19.964303   23379 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:48:19.977617   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:48:20.002994   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:48:20.029806   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:48:20.053441   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:48:20.076846   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1202 11:48:20.100859   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:48:20.123816   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:48:20.147882   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:48:20.170789   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:48:20.194677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:48:20.217677   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:48:20.242059   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:48:20.259613   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:48:20.277187   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:48:20.294496   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:48:20.311183   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:48:20.328629   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:48:20.347609   23379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:48:20.365780   23379 ssh_runner.go:195] Run: openssl version
	I1202 11:48:20.371782   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:48:20.383879   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388524   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.388568   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:48:20.394674   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:48:20.407273   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:48:20.419450   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424025   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.424067   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:48:20.429730   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:48:20.440110   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:48:20.451047   23379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456468   23379 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.456512   23379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:48:20.462924   23379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
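
The test -L / ln -fs steps above follow OpenSSL's hashed-directory convention: each trusted PEM gets a symlink named <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem in this run). A minimal Go sketch of that single step, assuming openssl is on PATH; paths are illustrative and taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(pemPath string) error {
    	// Same hash the log computes with: openssl x509 -hash -noout -in <pem>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
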
	I1202 11:48:20.474358   23379 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:48:20.478447   23379 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:48:20.478499   23379 kubeadm.go:934] updating node {m03 192.168.39.211 8443 v1.31.2 crio true true} ...
	I1202 11:48:20.478603   23379 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:48:20.478639   23379 kube-vip.go:115] generating kube-vip config ...
	I1202 11:48:20.478678   23379 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:48:20.496205   23379 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:48:20.496274   23379 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
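
The kube-vip static-pod manifest above is generated from a template and later written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp further down). A heavily reduced sketch of that kind of templating follows; it is not minikube's kube-vip.go, carries only a few of the environment variables shown above, and the parameter names are purely illustrative:

    package main

    import (
    	"os"
    	"text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: address
          value: "{{.VIP}}"
        - name: port
          value: "{{.Port}}"
        - name: cp_enable
          value: "true"
        - name: lb_enable
          value: "true"
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifest))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Image": "ghcr.io/kube-vip/kube-vip:v0.8.6",
    		"VIP":   "192.168.39.254",
    		"Port":  "8443",
    	})
    }
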
	I1202 11:48:20.496312   23379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.507618   23379 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1202 11:48:20.507658   23379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1202 11:48:20.517119   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1202 11:48:20.517130   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1202 11:48:20.517161   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517164   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:48:20.517126   23379 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1202 11:48:20.517219   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1202 11:48:20.517234   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.517303   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1202 11:48:20.534132   23379 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534202   23379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1202 11:48:20.534220   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1202 11:48:20.534247   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1202 11:48:20.534296   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1202 11:48:20.534330   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1202 11:48:20.553870   23379 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1202 11:48:20.553896   23379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
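
The "Not caching binary" lines above point at dl.k8s.io URLs carrying a ?checksum=file:...sha256 fragment, i.e. each kubeadm/kubectl/kubelet download is checked against its published .sha256 file before being placed under /var/lib/minikube/binaries. A self-contained sketch of that pattern (not minikube's download code; URL and destination below are illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchVerified downloads url to dest while hashing it, then compares the
    // digest against the first field of the published .sha256 file at sumURL.
    func fetchVerified(url, sumURL, dest string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
    		return err
    	}

    	sumResp, err := http.Get(sumURL)
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	want, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != strings.Fields(string(want))[0] {
    		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
    	}
    	return nil
    }

    func main() {
    	const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
    	if err := fetchVerified(base, base+".sha256", "/tmp/kubelet"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
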
	I1202 11:48:21.369626   23379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:48:21.380201   23379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1202 11:48:21.397686   23379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:48:21.414134   23379 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:48:21.430962   23379 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:48:21.434795   23379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:48:21.446707   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:21.575648   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:21.592190   23379 host.go:66] Checking if "ha-604935" exists ...
	I1202 11:48:21.592653   23379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:48:21.592702   23379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:48:21.607602   23379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I1202 11:48:21.608034   23379 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:48:21.608505   23379 main.go:141] libmachine: Using API Version  1
	I1202 11:48:21.608523   23379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:48:21.608871   23379 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:48:21.609064   23379 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:48:21.609215   23379 start.go:317] joinCluster: &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:48:21.609330   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1202 11:48:21.609352   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:48:21.612246   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612678   23379 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:48:21.612705   23379 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:48:21.612919   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:48:21.613101   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:48:21.613260   23379 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:48:21.613431   23379 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:48:21.802258   23379 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:21.802311   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443"
	I1202 11:48:44.058534   23379 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token oi1g5f.7vg9nzzhmrri7fzl --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-604935-m03 --control-plane --apiserver-advertise-address=192.168.39.211 --apiserver-bind-port=8443": (22.25619815s)
	I1202 11:48:44.058574   23379 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1202 11:48:44.589392   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-604935-m03 minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=ha-604935 minikube.k8s.io/primary=false
	I1202 11:48:44.754182   23379 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-604935-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1202 11:48:44.876509   23379 start.go:319] duration metric: took 23.267291972s to joinCluster
	I1202 11:48:44.876583   23379 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:48:44.876929   23379 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:48:44.877896   23379 out.go:177] * Verifying Kubernetes components...
	I1202 11:48:44.879178   23379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:48:45.205771   23379 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:48:45.227079   23379 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:48:45.227379   23379 kapi.go:59] client config for ha-604935: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:48:45.227437   23379 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.102:8443
	I1202 11:48:45.227646   23379 node_ready.go:35] waiting up to 6m0s for node "ha-604935-m03" to be "Ready" ...
	I1202 11:48:45.227731   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.227739   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.227750   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.227760   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.230602   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:45.728816   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:45.728844   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:45.728856   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:45.728862   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:45.732325   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:46.228808   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.228838   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.228847   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.228855   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.232971   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:46.728246   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:46.728266   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:46.728275   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:46.728278   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:46.731578   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:47.228275   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.228293   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.228302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.228305   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.231235   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:47.231687   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:47.728543   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:47.728564   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:47.728575   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:47.728580   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:47.731725   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.228100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.228126   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.228134   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.228139   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.231200   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:48.727927   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:48.727953   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:48.727965   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:48.727971   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:48.731841   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.228251   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.228277   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.228288   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.228295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.231887   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:49.232816   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:49.728539   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:49.728558   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:49.728567   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:49.728578   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:49.731618   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.228164   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.228182   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.228190   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.228194   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.231677   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:50.728841   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:50.728865   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:50.728877   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:50.728884   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:50.731790   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:51.227844   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.227875   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.227882   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.227886   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.231092   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.728369   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:51.728389   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:51.728397   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:51.728402   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:51.731512   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:51.732161   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:52.228555   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.228577   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.228585   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.228590   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.232624   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:52.727915   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:52.727935   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:52.727942   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:52.727946   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:52.731213   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:53.228361   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.228382   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.228389   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.228392   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.233382   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:48:53.728248   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:53.728268   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:53.728276   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:53.728280   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:53.731032   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:54.228383   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.228402   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.228409   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.228414   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.231567   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:54.232182   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:54.728033   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:54.728054   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:54.728070   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:54.728078   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:54.731003   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:55.227931   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.227952   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.227959   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.227963   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.231124   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:55.728257   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:55.728282   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:55.728295   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:55.728302   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:55.731469   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.228616   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.228634   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.228642   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.228648   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.231749   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:56.232413   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:56.728627   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:56.728662   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:56.728672   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:56.728679   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:56.731199   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.228073   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.228095   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.228106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.228112   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.231071   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:57.728355   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:57.728374   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:57.728386   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:57.728390   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:57.732053   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.228692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.228716   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.228725   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.228731   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.231871   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:58.232534   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:48:58.727842   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:58.727867   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:58.727888   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:58.727893   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:58.730412   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:48:59.228495   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.228515   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.228522   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.228525   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.232497   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:48:59.728247   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:48:59.728264   23379 round_trippers.go:469] Request Headers:
	I1202 11:48:59.728272   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:48:59.728275   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:48:59.731212   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.227900   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.227922   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.227929   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.227932   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.232057   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:00.233141   23379 node_ready.go:53] node "ha-604935-m03" has status "Ready":"False"
	I1202 11:49:00.728080   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:00.728104   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.728116   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.728123   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.730928   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.731736   23379 node_ready.go:49] node "ha-604935-m03" has status "Ready":"True"
	I1202 11:49:00.731754   23379 node_ready.go:38] duration metric: took 15.50409308s for node "ha-604935-m03" to be "Ready" ...
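
node_ready.go above polls GET /api/v1/nodes/<name> roughly every 500ms until the Ready condition turns True (about 15.5s here). The same wait can be sketched with client-go's poll helper; this is not minikube's node_ready.go, and the kubeconfig path and node name are simply taken from this run:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node object until its Ready condition is True.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient errors and keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitNodeReady(context.Background(), cs, "ha-604935-m03", 6*time.Minute); err != nil {
    		fmt.Println("node not ready:", err)
    	}
    }
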
	I1202 11:49:00.731762   23379 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:00.731812   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:00.731821   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.731828   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.731833   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.737119   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:00.743811   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.743881   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5gcc2
	I1202 11:49:00.743889   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.743896   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.743900   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.746447   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.747270   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.747288   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.747298   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.747304   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.750173   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.750663   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.750685   23379 pod_ready.go:82] duration metric: took 6.851528ms for pod "coredns-7c65d6cfc9-5gcc2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750697   23379 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.750762   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-g48q9
	I1202 11:49:00.750773   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.750782   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.750787   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.753393   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.754225   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.754242   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.754253   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.754261   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.756959   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.757348   23379 pod_ready.go:93] pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.757363   23379 pod_ready.go:82] duration metric: took 6.658502ms for pod "coredns-7c65d6cfc9-g48q9" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757372   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.757427   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935
	I1202 11:49:00.757438   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.757444   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.757449   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.759919   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.760524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:00.760540   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.760551   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.760557   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.762639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.763103   23379 pod_ready.go:93] pod "etcd-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.763117   23379 pod_ready.go:82] duration metric: took 5.738836ms for pod "etcd-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763130   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.763170   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m02
	I1202 11:49:00.763178   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.763184   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.763187   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.765295   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:00.765840   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:00.765853   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.765859   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.765866   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.767856   23379 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:49:00.768294   23379 pod_ready.go:93] pod "etcd-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:00.768308   23379 pod_ready.go:82] duration metric: took 5.173078ms for pod "etcd-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.768315   23379 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:00.928568   23379 request.go:632] Waited for 160.204775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928622   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/etcd-ha-604935-m03
	I1202 11:49:00.928630   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:00.928637   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:00.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:00.931639   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.129121   23379 request.go:632] Waited for 196.362858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129188   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:01.129194   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.129201   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.129206   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.132093   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.132639   23379 pod_ready.go:93] pod "etcd-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.132663   23379 pod_ready.go:82] duration metric: took 364.340751ms for pod "etcd-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.132685   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.328581   23379 request.go:632] Waited for 195.818618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328640   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935
	I1202 11:49:01.328645   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.328651   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.328659   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.332129   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.528887   23379 request.go:632] Waited for 196.197458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528960   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:01.528968   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.528983   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.528991   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.531764   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:01.532366   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.532385   23379 pod_ready.go:82] duration metric: took 399.689084ms for pod "kube-apiserver-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.532395   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.729145   23379 request.go:632] Waited for 196.686289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729214   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m02
	I1202 11:49:01.729222   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.729232   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.729241   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.732550   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.928940   23379 request.go:632] Waited for 195.375728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929027   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:01.929039   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:01.929049   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:01.929060   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:01.932849   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:01.933394   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:01.933415   23379 pod_ready.go:82] duration metric: took 401.013286ms for pod "kube-apiserver-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:01.933428   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.128618   23379 request.go:632] Waited for 195.115216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128692   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-604935-m03
	I1202 11:49:02.128704   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.128714   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.128744   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.132085   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.328195   23379 request.go:632] Waited for 195.287157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328272   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:02.328280   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.328290   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.328294   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.331350   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:02.332062   23379 pod_ready.go:93] pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.332086   23379 pod_ready.go:82] duration metric: took 398.648799ms for pod "kube-apiserver-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.332096   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.528402   23379 request.go:632] Waited for 196.237056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528456   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935
	I1202 11:49:02.528461   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.528468   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.528471   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.531001   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:02.729030   23379 request.go:632] Waited for 197.344265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729083   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:02.729088   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.729095   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.729101   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.733927   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:02.734415   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:02.734433   23379 pod_ready.go:82] duration metric: took 402.330362ms for pod "kube-controller-manager-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.734442   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:02.928547   23379 request.go:632] Waited for 194.020533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928615   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m02
	I1202 11:49:02.928624   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:02.928634   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:02.928644   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:02.933547   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.128827   23379 request.go:632] Waited for 194.344486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128890   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:03.128895   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.128915   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.128921   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.133610   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:03.134316   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.134333   23379 pod_ready.go:82] duration metric: took 399.884969ms for pod "kube-controller-manager-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.134345   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.328421   23379 request.go:632] Waited for 194.000988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328488   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-604935-m03
	I1202 11:49:03.328493   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.328500   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.328505   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.331240   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:03.528448   23379 request.go:632] Waited for 196.353439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528524   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.528532   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.528542   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.528554   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.532267   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.532704   23379 pod_ready.go:93] pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.532722   23379 pod_ready.go:82] duration metric: took 398.368333ms for pod "kube-controller-manager-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.532747   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.728896   23379 request.go:632] Waited for 196.080235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728966   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rp7t2
	I1202 11:49:03.728972   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.728979   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.728982   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.732009   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.929024   23379 request.go:632] Waited for 196.282412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929090   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:03.929096   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:03.929106   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:03.929111   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:03.932496   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:03.933154   23379 pod_ready.go:93] pod "kube-proxy-rp7t2" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:03.933174   23379 pod_ready.go:82] duration metric: took 400.416355ms for pod "kube-proxy-rp7t2" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:03.933184   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.128132   23379 request.go:632] Waited for 194.87576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128183   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tqcb6
	I1202 11:49:04.128188   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.128196   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.128200   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.131316   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.328392   23379 request.go:632] Waited for 196.344562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328464   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:04.328472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.328488   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.328504   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.331622   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:04.332330   23379 pod_ready.go:93] pod "kube-proxy-tqcb6" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.332349   23379 pod_ready.go:82] duration metric: took 399.158434ms for pod "kube-proxy-tqcb6" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.332362   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.528404   23379 request.go:632] Waited for 195.973025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528476   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w9r4x
	I1202 11:49:04.528485   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.528499   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.528512   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.531287   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.728831   23379 request.go:632] Waited for 196.723103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728880   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:04.728888   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.728918   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.728926   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.731917   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:04.732716   23379 pod_ready.go:93] pod "kube-proxy-w9r4x" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:04.732733   23379 pod_ready.go:82] duration metric: took 400.363929ms for pod "kube-proxy-w9r4x" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.732741   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:04.928126   23379 request.go:632] Waited for 195.328391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928208   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935
	I1202 11:49:04.928219   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:04.928242   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:04.928251   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:04.931908   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.129033   23379 request.go:632] Waited for 196.165096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129107   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935
	I1202 11:49:05.129114   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.129124   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.129131   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.132837   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.133502   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.133521   23379 pod_ready.go:82] duration metric: took 400.774358ms for pod "kube-scheduler-ha-604935" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.133531   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.328705   23379 request.go:632] Waited for 195.110801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328775   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m02
	I1202 11:49:05.328782   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.328792   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.328804   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.332423   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.528425   23379 request.go:632] Waited for 195.360611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528479   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m02
	I1202 11:49:05.528484   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.528491   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.528494   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.531378   23379 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:05.531939   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.531957   23379 pod_ready.go:82] duration metric: took 398.419577ms for pod "kube-scheduler-ha-604935-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.531967   23379 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.728987   23379 request.go:632] Waited for 196.947438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729040   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-604935-m03
	I1202 11:49:05.729045   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.729052   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.729056   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.732940   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.928937   23379 request.go:632] Waited for 195.348906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928990   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes/ha-604935-m03
	I1202 11:49:05.928996   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.929007   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.929023   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.932936   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:05.933995   23379 pod_ready.go:93] pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:05.934013   23379 pod_ready.go:82] duration metric: took 402.03942ms for pod "kube-scheduler-ha-604935-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:05.934028   23379 pod_ready.go:39] duration metric: took 5.202257007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:05.934044   23379 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:49:05.934111   23379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:49:05.950308   23379 api_server.go:72] duration metric: took 21.073692026s to wait for apiserver process to appear ...
	I1202 11:49:05.950330   23379 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:49:05.950350   23379 api_server.go:253] Checking apiserver healthz at https://192.168.39.102:8443/healthz ...
	I1202 11:49:05.954392   23379 api_server.go:279] https://192.168.39.102:8443/healthz returned 200:
	ok
	I1202 11:49:05.954463   23379 round_trippers.go:463] GET https://192.168.39.102:8443/version
	I1202 11:49:05.954472   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:05.954479   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:05.954484   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:05.955264   23379 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1202 11:49:05.955324   23379 api_server.go:141] control plane version: v1.31.2
	I1202 11:49:05.955340   23379 api_server.go:131] duration metric: took 5.002951ms to wait for apiserver health ...
	I1202 11:49:05.955348   23379 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:49:06.128765   23379 request.go:632] Waited for 173.340291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128831   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.128854   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.128868   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.128878   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.134738   23379 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:06.141415   23379 system_pods.go:59] 24 kube-system pods found
	I1202 11:49:06.141437   23379 system_pods.go:61] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.141442   23379 system_pods.go:61] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.141446   23379 system_pods.go:61] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.141449   23379 system_pods.go:61] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.141453   23379 system_pods.go:61] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.141457   23379 system_pods.go:61] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.141461   23379 system_pods.go:61] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.141464   23379 system_pods.go:61] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.141468   23379 system_pods.go:61] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.141471   23379 system_pods.go:61] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.141475   23379 system_pods.go:61] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.141479   23379 system_pods.go:61] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.141487   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.141494   23379 system_pods.go:61] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.141507   23379 system_pods.go:61] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.141512   23379 system_pods.go:61] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.141517   23379 system_pods.go:61] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.141523   23379 system_pods.go:61] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.141527   23379 system_pods.go:61] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.141531   23379 system_pods.go:61] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.141534   23379 system_pods.go:61] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.141540   23379 system_pods.go:61] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.141543   23379 system_pods.go:61] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.141545   23379 system_pods.go:61] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.141551   23379 system_pods.go:74] duration metric: took 186.197102ms to wait for pod list to return data ...
	I1202 11:49:06.141560   23379 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:49:06.329008   23379 request.go:632] Waited for 187.367529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329100   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:49:06.329113   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.329125   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.329130   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.332755   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.332967   23379 default_sa.go:45] found service account: "default"
	I1202 11:49:06.332983   23379 default_sa.go:55] duration metric: took 191.417488ms for default service account to be created ...
	I1202 11:49:06.332991   23379 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:49:06.528293   23379 request.go:632] Waited for 195.242273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528366   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:06.528375   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.528382   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.528388   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.533257   23379 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:49:06.539940   23379 system_pods.go:86] 24 kube-system pods found
	I1202 11:49:06.539965   23379 system_pods.go:89] "coredns-7c65d6cfc9-5gcc2" [63fea190-8001-4264-a579-13a9cae6ddff] Running
	I1202 11:49:06.539970   23379 system_pods.go:89] "coredns-7c65d6cfc9-g48q9" [66ce87a9-4918-45fd-9721-d4e6323b7b54] Running
	I1202 11:49:06.539976   23379 system_pods.go:89] "etcd-ha-604935" [975d8a6a-bc5b-4027-918b-a08b51dba058] Running
	I1202 11:49:06.539980   23379 system_pods.go:89] "etcd-ha-604935-m02" [cf1eb147-15c4-4e8b-a006-ad464eb08a7c] Running
	I1202 11:49:06.539983   23379 system_pods.go:89] "etcd-ha-604935-m03" [2de6c192-755f-43c7-a973-b1137b03c49f] Running
	I1202 11:49:06.539986   23379 system_pods.go:89] "kindnet-j4cr6" [07287f32-1272-4735-bb43-88f862b28657] Running
	I1202 11:49:06.539989   23379 system_pods.go:89] "kindnet-k99r8" [e5466844-1f48-46c2-8e34-c4bf016b9656] Running
	I1202 11:49:06.539995   23379 system_pods.go:89] "kindnet-l55rq" [f6e00384-f7e5-49ce-ad05-f9de89a25d7f] Running
	I1202 11:49:06.539998   23379 system_pods.go:89] "kube-apiserver-ha-604935" [a460bc27-8af0-4958-9d3a-4b96773893a3] Running
	I1202 11:49:06.540002   23379 system_pods.go:89] "kube-apiserver-ha-604935-m02" [23f6728e-073f-4ce4-a707-3817710b49fe] Running
	I1202 11:49:06.540006   23379 system_pods.go:89] "kube-apiserver-ha-604935-m03" [74b078f5-560f-4077-be17-91f7add9545f] Running
	I1202 11:49:06.540009   23379 system_pods.go:89] "kube-controller-manager-ha-604935" [7ea1d4fd-527e-4734-818b-71c55b3c4693] Running
	I1202 11:49:06.540013   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m02" [588d55f1-b51e-4169-bdbb-b536ab420894] Running
	I1202 11:49:06.540016   23379 system_pods.go:89] "kube-controller-manager-ha-604935-m03" [445254dd-244a-4f40-9a0c-362bd03686c3] Running
	I1202 11:49:06.540020   23379 system_pods.go:89] "kube-proxy-rp7t2" [84b2dba2-d1be-49b6-addc-a9d919ef683e] Running
	I1202 11:49:06.540024   23379 system_pods.go:89] "kube-proxy-tqcb6" [d576fbb5-bee1-4482-82f5-b21a5e1e65f9] Running
	I1202 11:49:06.540028   23379 system_pods.go:89] "kube-proxy-w9r4x" [4131636c-d2a2-4aa3-aff6-aa77b517af72] Running
	I1202 11:49:06.540034   23379 system_pods.go:89] "kube-scheduler-ha-604935" [17934eac-3356-4060-b485-5eeecfca13b6] Running
	I1202 11:49:06.540037   23379 system_pods.go:89] "kube-scheduler-ha-604935-m02" [ad12ea88-561b-4ee1-90de-4c6b0185af02] Running
	I1202 11:49:06.540040   23379 system_pods.go:89] "kube-scheduler-ha-604935-m03" [45cc93ef-1da2-469b-a0de-8bc9b8383094] Running
	I1202 11:49:06.540043   23379 system_pods.go:89] "kube-vip-ha-604935" [8001b094-839e-42ea-82a8-76730b6657fc] Running
	I1202 11:49:06.540046   23379 system_pods.go:89] "kube-vip-ha-604935-m02" [12f222eb-a19b-42e0-b398-919847c3e224] Running
	I1202 11:49:06.540049   23379 system_pods.go:89] "kube-vip-ha-604935-m03" [5c5c4e09-5ad1-4b08-8ea3-84260528b78e] Running
	I1202 11:49:06.540053   23379 system_pods.go:89] "storage-provisioner" [1023dda9-1199-4200-9b82-bb054a0eedff] Running
	I1202 11:49:06.540058   23379 system_pods.go:126] duration metric: took 207.062281ms to wait for k8s-apps to be running ...
	I1202 11:49:06.540068   23379 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:49:06.540106   23379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:49:06.555319   23379 system_svc.go:56] duration metric: took 15.24289ms WaitForService to wait for kubelet
	I1202 11:49:06.555341   23379 kubeadm.go:582] duration metric: took 21.678727669s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:49:06.555356   23379 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:49:06.728222   23379 request.go:632] Waited for 172.787542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728311   23379 round_trippers.go:463] GET https://192.168.39.102:8443/api/v1/nodes
	I1202 11:49:06.728317   23379 round_trippers.go:469] Request Headers:
	I1202 11:49:06.728327   23379 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:06.728332   23379 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:06.731784   23379 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:06.733040   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733062   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733074   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733079   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733084   23379 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 11:49:06.733088   23379 node_conditions.go:123] node cpu capacity is 2
	I1202 11:49:06.733094   23379 node_conditions.go:105] duration metric: took 177.727321ms to run NodePressure ...
	I1202 11:49:06.733107   23379 start.go:241] waiting for startup goroutines ...
	I1202 11:49:06.733138   23379 start.go:255] writing updated cluster config ...
	I1202 11:49:06.733452   23379 ssh_runner.go:195] Run: rm -f paused
	I1202 11:49:06.787558   23379 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:49:06.789249   23379 out.go:177] * Done! kubectl is now configured to use "ha-604935" cluster and "default" namespace by default
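The start-up waiter above finishes by probing the apiserver's /healthz and /version endpoints before declaring the cluster ready. Below is a minimal Go sketch of that kind of probe, assuming the same endpoint shown in the log (https://192.168.39.102:8443) and using InsecureSkipVerify for a quick manual check; this is an illustration only, not minikube's implementation, which authenticates with the client certificates from its kubeconfig.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log above; adjust for your own cluster.
    	url := "https://192.168.39.102:8443/healthz"

    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption for a quick manual probe: skip certificate verification.
    			// A real client would load the CA and client certs from kubeconfig.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}

    	resp, err := client.Get(url)
    	if err != nil {
    		fmt.Println("healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %s: %s\n", resp.Status, string(body))
    }

With the default RBAC bindings that expose /healthz and /version to unauthenticated clients, an equivalent one-off check from the host is `curl -k https://192.168.39.102:8443/healthz`, which prints `ok` when the control plane is healthy, matching the "returned 200: ok" lines in the log above.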
	
	
	==> CRI-O <==
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.749364973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140382749346349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19fc1a96-f29d-4706-a136-c154450c2fed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.749944626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbf51517-9002-4875-bed7-9528ead7d958 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.750008487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbf51517-9002-4875-bed7-9528ead7d958 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.750247051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbf51517-9002-4875-bed7-9528ead7d958 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.790309516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aeca8a7e-d0e0-490b-b066-b34f3aa121cb name=/runtime.v1.RuntimeService/Version
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.790408028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aeca8a7e-d0e0-490b-b066-b34f3aa121cb name=/runtime.v1.RuntimeService/Version
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.791684067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04c079d5-9c0e-4a99-b87f-275b7be405d6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.792117706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140382792095885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04c079d5-9c0e-4a99-b87f-275b7be405d6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.792845605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6acab0ad-6e27-4cae-a037-0a076d159d19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.792919255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6acab0ad-6e27-4cae-a037-0a076d159d19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.793162089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6acab0ad-6e27-4cae-a037-0a076d159d19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.835408944Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df77c245-c886-41b4-aa3d-6dd2a71ce444 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.835532409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df77c245-c886-41b4-aa3d-6dd2a71ce444 name=/runtime.v1.RuntimeService/Version
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.836572996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55c38a1a-468b-4330-8856-b599cf745945 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.837005648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140382836985604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55c38a1a-468b-4330-8856-b599cf745945 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.837491811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f861f5f8-7497-4031-817e-983ed2f6eb98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.837543242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f861f5f8-7497-4031-817e-983ed2f6eb98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.837760213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f861f5f8-7497-4031-817e-983ed2f6eb98 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.884252128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e0fdc93-b0b3-4426-9f35-e3aba765198e name=/runtime.v1.RuntimeService/Version
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.884345457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e0fdc93-b0b3-4426-9f35-e3aba765198e name=/runtime.v1.RuntimeService/Version
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.885776588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=812da1aa-3a77-4cdb-9890-84ad26ecca53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.886371924Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140382886345210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=812da1aa-3a77-4cdb-9890-84ad26ecca53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.887038558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37307bfa-cd69-480a-ad16-749c7eabf621 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.887090481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37307bfa-cd69-480a-ad16-749c7eabf621 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 11:53:02 ha-604935 crio[658]: time="2024-12-02 11:53:02.887311375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:27068dc5178bb512a4201c24f112921dbc69a54b3de445e99361ebc6e1a8a04f,PodSandboxId:1f0c13e6637482c6966b90bbb60cbca86c19868fc516fc4629ade2bdd4d9d483,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733140150654796776,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8jxc4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818,PodSandboxId:72cc1a04d89653843bc160c98ae01ee6cfedc5ebacddb229ccbc63780f4d4a8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013667994174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-g48q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ce87a9-4918-45fd-9721-d4e6323b7b54,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f,PodSandboxId:abbb2caf2ff000773d6503f131a21319cdc0d6f3b1a7de587abd56ee02938284,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733140013620565576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5gcc2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
63fea190-8001-4264-a579-13a9cae6ddff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b,PodSandboxId:40752b9892351bc75b6cd91484757f7e6db0dee641b6cec65a7a060689d4d399,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733140013551564880,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1023dda9-1199-4200-9b82-bb054a0eedff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10,PodSandboxId:646eade60f2d2dcaf079e431f708d6e5a90f86fe0b30f7532a19a86300804e0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733140001813641457,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k99r8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5466844-1f48-46c2-8e34-c4bf016b9656,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73,PodSandboxId:8ba57f92e62cdae2610e4c7de39af04f147c7408788863e924430b915e899841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139999
459988124,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tqcb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576fbb5-bee1-4482-82f5-b21a5e1e65f9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7,PodSandboxId:096eb67e8b05d2e019fcdc59cf90012e96bb133ed4fc518272d8555bd7691a25,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812,State:CONTAINER_RUNNING,CreatedAt:173313998981
2208624,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e34a31690bf4b94086a296305429f2bd,},Annotations:map[string]string{io.kubernetes.container.hash: 99626828,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46,PodSandboxId:ec95830bfe24d0d7dfd624643acd637426c116f403b236eac303aa9a8e5cad6b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139988049859682,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e46709c5369afc1ad72a60c327e7e03,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35,PodSandboxId:1989811c4f393a68d87ce80bb90cddf7b4f15803a5911413673a62e3ef62d0c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139988057643481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3795b7eb129e1555193fc4481f415c61,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41,PodSandboxId:8978121739b66eb0f04f3ea853d27ab99bea0dbddd40802eb2923658ee3fcc94,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139988061573443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 367ab693a9f84a18356ae64542b127be,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6,PodSandboxId:fc4151eee5a3f36f12aa7fd157e39d3deeb59d8ae53fe81d898ae54b294c0cbd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139988034933453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-604935,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1298b086a2bd0a1c4a6a3d5c72224eab,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37307bfa-cd69-480a-ad16-749c7eabf621 name=/runtime.v1.RuntimeService/ListContainers
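	
	The Version, ImageFsInfo and ListContainers request/response pairs above are the kubelet's routine CRI polling of CRI-O rather than test activity. A minimal sketch for pulling the same data straight from the node, assuming the ha-604935 profile and the crio socket named in the node annotations further down:
	
	  minikube -p ha-604935 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  minikube -p ha-604935 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo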
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	27068dc5178bb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1f0c13e663748       busybox-7dff88458-8jxc4
	be0c4adffd61b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   72cc1a04d8965       coredns-7c65d6cfc9-g48q9
	91c90e9d05cf7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   abbb2caf2ff00       coredns-7c65d6cfc9-5gcc2
	9d7d77b59569b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   40752b9892351       storage-provisioner
	579b11920d9fd       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   646eade60f2d2       kindnet-k99r8
	f6a700874f779       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   8ba57f92e62cd       kube-proxy-tqcb6
	17bfa0393f187       ghcr.io/kube-vip/kube-vip@sha256:1efe86893baf2d3c97c452fa53641ba647553ed0e639db69e56473d5a238462e     6 minutes ago       Running             kube-vip                  0                   096eb67e8b05d       kube-vip-ha-604935
	275d716cfd4f7       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   8978121739b66       kube-controller-manager-ha-604935
	090e4a0254277       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   1989811c4f393       kube-scheduler-ha-604935
	53184ed95349a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   ec95830bfe24d       etcd-ha-604935
	9624bba327f9b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   fc4151eee5a3f       kube-apiserver-ha-604935
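	
	The table above is the container summary for the primary control-plane node; the IDs in the first column are prefixes of the full container IDs shown in the crio log. A hedged sketch for digging into individual containers, reusing ID prefixes from the table:
	
	  minikube -p ha-604935 ssh -- sudo crictl inspect 27068dc5178bb
	  minikube -p ha-604935 ssh -- sudo crictl logs be0c4adffd61b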
	
	
	==> coredns [91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f] <==
	[INFO] 10.244.0.4:39323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215731s
	[INFO] 10.244.0.4:33525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162613s
	[INFO] 10.244.0.4:39123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125815s
	[INFO] 10.244.0.4:37376 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000244786s
	[INFO] 10.244.2.2:44210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174232s
	[INFO] 10.244.2.2:54748 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001765833s
	[INFO] 10.244.2.2:60174 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000284786s
	[INFO] 10.244.2.2:50584 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109022s
	[INFO] 10.244.2.2:34854 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001186229s
	[INFO] 10.244.2.2:42659 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081441s
	[INFO] 10.244.2.2:51018 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119851s
	[INFO] 10.244.1.2:51189 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371264s
	[INFO] 10.244.1.2:57162 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158703s
	[INFO] 10.244.0.4:59693 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068002s
	[INFO] 10.244.0.4:51163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042042s
	[INFO] 10.244.2.2:40625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117188s
	[INFO] 10.244.1.2:49002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091339s
	[INFO] 10.244.1.2:42507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192925s
	[INFO] 10.244.0.4:36452 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215238s
	[INFO] 10.244.0.4:41389 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00010969s
	[INFO] 10.244.2.2:55194 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180309s
	[INFO] 10.244.2.2:45875 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109142s
	[INFO] 10.244.1.2:42301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164839s
	[INFO] 10.244.1.2:47133 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176562s
	[INFO] 10.244.1.2:42848 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122646s
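	
	Each line above follows the CoreDNS log plugin's common format: client ip:port, query counter, the quoted query (type, class, name, protocol, request size, DO bit, UDP buffer size), then rcode, response flags, response size and latency. Every query here was answered (NOERROR or NXDOMAIN) in well under 5 ms. A sketch for tailing the same logs through the API server, assuming the standard k8s-app=kube-dns label on the CoreDNS pods:
	
	  kubectl --context ha-604935 -n kube-system logs -l k8s-app=kube-dns --tail=20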
	
	
	==> coredns [be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818] <==
	[INFO] 10.244.1.2:33047 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000108391s
	[INFO] 10.244.1.2:40927 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001980013s
	[INFO] 10.244.0.4:37566 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004168289s
	[INFO] 10.244.0.4:36737 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252503s
	[INFO] 10.244.0.4:33046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003375406s
	[INFO] 10.244.0.4:42598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177128s
	[INFO] 10.244.2.2:46358 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148802s
	[INFO] 10.244.1.2:55837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194128s
	[INFO] 10.244.1.2:55278 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002096061s
	[INFO] 10.244.1.2:45640 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141771s
	[INFO] 10.244.1.2:36834 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204172s
	[INFO] 10.244.1.2:41503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00026722s
	[INFO] 10.244.1.2:46043 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001413s
	[INFO] 10.244.0.4:37544 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011909s
	[INFO] 10.244.0.4:58597 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007644s
	[INFO] 10.244.2.2:41510 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179912s
	[INFO] 10.244.2.2:41733 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013607s
	[INFO] 10.244.2.2:57759 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000205972s
	[INFO] 10.244.1.2:54620 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248357s
	[INFO] 10.244.1.2:40630 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109148s
	[INFO] 10.244.0.4:39309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113844s
	[INFO] 10.244.0.4:42691 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170784s
	[INFO] 10.244.2.2:41138 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112783s
	[INFO] 10.244.2.2:32778 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073017s
	[INFO] 10.244.1.2:42298 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00018329s
	
	
	==> describe nodes <==
	Name:               ha-604935
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_46_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:53:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:38 +0000   Mon, 02 Dec 2024 11:46:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-604935
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4653179aa8d04165a06718969a078842
	  System UUID:                4653179a-a8d0-4165-a067-18969a078842
	  Boot ID:                    059fb5e8-3774-458b-bfbf-8364817017d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8jxc4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-7c65d6cfc9-5gcc2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7c65d6cfc9-g48q9             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-604935                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m29s
	  kube-system                 kindnet-k99r8                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-604935             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-ha-604935    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-tqcb6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-604935             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-vip-ha-604935                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m23s                  kube-proxy       
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x2 over 6m29s)  kubelet          Node ha-604935 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x2 over 6m29s)  kubelet          Node ha-604935 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x2 over 6m29s)  kubelet          Node ha-604935 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m26s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  NodeReady                6m10s                  kubelet          Node ha-604935 status is now: NodeReady
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-604935 event: Registered Node ha-604935 in Controller
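	
	The primary node ha-604935 reports Ready with no taints and roughly half of its CPU requested (950m of 2 cores). A minimal sketch for reproducing this summary and listing the pods behind the Allocated resources figures:
	
	  kubectl --context ha-604935 describe node ha-604935
	  kubectl --context ha-604935 get pods -A --field-selector spec.nodeName=ha-604935 -o wide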
	
	
	Name:               ha-604935-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_47_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:47:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:50:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 02 Dec 2024 11:49:33 +0000   Mon, 02 Dec 2024 11:51:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.96
	  Hostname:    ha-604935-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0f21093f5748416fa30ea8181c31a3f7
	  System UUID:                0f21093f-5748-416f-a30e-a8181c31a3f7
	  Boot ID:                    5621b6a5-bb1a-408d-b692-10c4aad4b418
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xbb9t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-604935-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-l55rq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-604935-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-604935-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-proxy-w9r4x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-604935-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-604935-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node ha-604935-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m33s)  kubelet          Node ha-604935-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-604935-m02 event: Registered Node ha-604935-m02 in Controller
	  Normal  NodeNotReady             118s                   node-controller  Node ha-604935-m02 status is now: NodeNotReady
	
	
	Name:               ha-604935-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_48_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:48:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:48:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:11 +0000   Mon, 02 Dec 2024 11:49:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    ha-604935-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8588450b38914bf3ac287b253d72fb4d
	  System UUID:                8588450b-3891-4bf3-ac28-7b253d72fb4d
	  Boot ID:                    735a98f4-21e5-4433-a99b-76bab3cbd392
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-l5kq7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-604935-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kindnet-j4cr6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m23s
	  kube-system                 kube-apiserver-ha-604935-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-ha-604935-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-rp7t2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-ha-604935-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-604935-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node ha-604935-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node ha-604935-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-604935-m03 event: Registered Node ha-604935-m03 in Controller
	
	
	Name:               ha-604935-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-604935-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-604935
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_49_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:49:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-604935-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:52:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:49:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:50:15 +0000   Mon, 02 Dec 2024 11:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    ha-604935-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 577fefe5032840e68ccf6ba2b6fbcf44
	  System UUID:                577fefe5-0328-40e6-8ccf-6ba2b6fbcf44
	  Boot ID:                    5f3dbc6d-6884-49f4-acef-8235bb29f467
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwxsc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m18s
	  kube-system                 kube-proxy-v649d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m19s (x2 over 3m19s)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x2 over 3m19s)  kubelet          Node ha-604935-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x2 over 3m19s)  kubelet          Node ha-604935-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m18s                  cidrAllocator    Node ha-604935-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-604935-m04 event: Registered Node ha-604935-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-604935-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 2 11:45] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051551] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040036] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec 2 11:46] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.564296] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.579239] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.318373] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057883] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.148672] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.135107] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.277991] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.959381] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.016173] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058991] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.327237] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.069565] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.092272] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.163087] kauditd_printk_skb: 38 callbacks suppressed
	[Dec 2 11:47] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46] <==
	{"level":"warn","ts":"2024-12-02T11:53:03.069002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.080697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.137588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.143804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.147278Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.157747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.164226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.168210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.169927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.174532Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.178275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.183136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.187490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.193019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.205968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.209050Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.211987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.253651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.260070Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.269483Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.287561Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.96:2380/version","remote-member-id":"9af0cb01a3f64a4e","error":"Get \"https://192.168.39.96:2380/version\": dial tcp 192.168.39.96:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-02T11:53:03.287607Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9af0cb01a3f64a4e","error":"Get \"https://192.168.39.96:2380/version\": dial tcp 192.168.39.96:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-02T11:53:03.334686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.340846Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-02T11:53:03.369066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b93c4bc4617b0fe","from":"6b93c4bc4617b0fe","remote-peer-id":"9af0cb01a3f64a4e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:53:03 up 7 min,  0 users,  load average: 0.60, 0.42, 0.19
	Linux ha-604935 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10] <==
	I1202 11:52:32.903249       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:42.901238       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:42.901327       1 main.go:301] handling current node
	I1202 11:52:42.901361       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:42.901380       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:42.901720       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:42.901758       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:42.903817       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:42.903856       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:52:52.900618       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:52:52.900723       1 main.go:301] handling current node
	I1202 11:52:52.900742       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:52:52.900750       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:52:52.901396       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:52:52.901501       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	I1202 11:52:52.901876       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:52:52.901972       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:53:02.909551       1 main.go:297] Handling node with IPs: map[192.168.39.26:{}]
	I1202 11:53:02.909577       1 main.go:324] Node ha-604935-m04 has CIDR [10.244.3.0/24] 
	I1202 11:53:02.909774       1 main.go:297] Handling node with IPs: map[192.168.39.102:{}]
	I1202 11:53:02.909781       1 main.go:301] handling current node
	I1202 11:53:02.909793       1 main.go:297] Handling node with IPs: map[192.168.39.96:{}]
	I1202 11:53:02.909796       1 main.go:324] Node ha-604935-m02 has CIDR [10.244.1.0/24] 
	I1202 11:53:02.909927       1 main.go:297] Handling node with IPs: map[192.168.39.211:{}]
	I1202 11:53:02.909933       1 main.go:324] Node ha-604935-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6] <==
	I1202 11:46:32.842650       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1202 11:46:32.848385       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102]
	I1202 11:46:32.849164       1 controller.go:615] quota admission added evaluator for: endpoints
	I1202 11:46:32.859606       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 11:46:33.159098       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1202 11:46:34.294370       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1202 11:46:34.315176       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	http2: server: error reading preface from client 192.168.39.254:47786: read tcp 192.168.39.254:8443->192.168.39.254:47786: read: connection reset by peer
	I1202 11:46:34.492102       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1202 11:46:38.758671       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1202 11:46:38.805955       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1202 11:49:11.846753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54452: use of closed network connection
	E1202 11:49:12.028104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54460: use of closed network connection
	E1202 11:49:12.199806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54474: use of closed network connection
	E1202 11:49:12.392612       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54484: use of closed network connection
	E1202 11:49:12.562047       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54506: use of closed network connection
	E1202 11:49:12.747509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54530: use of closed network connection
	E1202 11:49:12.939816       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54544: use of closed network connection
	E1202 11:49:13.121199       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54562: use of closed network connection
	E1202 11:49:13.295085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54584: use of closed network connection
	E1202 11:49:13.578607       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54612: use of closed network connection
	E1202 11:49:13.757972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54638: use of closed network connection
	E1202 11:49:14.099757       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54676: use of closed network connection
	E1202 11:49:14.269710       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54694: use of closed network connection
	E1202 11:49:14.441652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	
	
	==> kube-controller-manager [275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41] <==
	I1202 11:49:45.139269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.144540       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.233566       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.349805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:45.679160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.939032       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-604935-m04"
	I1202 11:49:47.939241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:47.969287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.605926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:49.681129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:49:55.357132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.214872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.215953       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:50:04.236833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:04.619357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:50:15.555711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m04"
	I1202 11:51:05.313473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.313596       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-604935-m04"
	I1202 11:51:05.338955       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:05.387666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.010033ms"
	I1202 11:51:05.388828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.832µs"
	I1202 11:51:05.441675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.06791ms"
	I1202 11:51:05.442993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.629µs"
	I1202 11:51:07.990253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	I1202 11:51:10.625653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-604935-m02"
	
	
	==> kube-proxy [f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 11:46:39.991996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 11:46:40.020254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.102"]
	E1202 11:46:40.020650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:46:40.086409       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 11:46:40.086557       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 11:46:40.086602       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:46:40.089997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:46:40.090696       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:46:40.090739       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:46:40.096206       1 config.go:199] "Starting service config controller"
	I1202 11:46:40.096522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:46:40.096732       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:46:40.096763       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:46:40.098314       1 config.go:328] "Starting node config controller"
	I1202 11:46:40.099010       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:46:40.196939       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:46:40.197006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:46:40.199281       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35] <==
	W1202 11:46:32.142852       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 11:46:32.142937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.153652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.153702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.221641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 11:46:32.221961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.358170       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:46:32.358291       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:46:32.429924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:46:32.430007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.430758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:46:32.430825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.449596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:46:32.449697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:46:32.505859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:46:32.505943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1202 11:46:34.815786       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1202 11:49:07.673886       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	E1202 11:49:07.674510       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fc236bbd-f34b-454f-a66d-b369cd19cf9d(default/busybox-7dff88458-xbb9t) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xbb9t"
	E1202 11:49:07.674758       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.675368       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f16f45f6-486f-4d2b-a2c6-f4a04b9c0eeb(default/busybox-7dff88458-8jxc4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8jxc4"
	E1202 11:49:07.675694       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8jxc4\": pod busybox-7dff88458-8jxc4 is already assigned to node \"ha-604935\"" pod="default/busybox-7dff88458-8jxc4"
	I1202 11:49:07.676018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8jxc4" node="ha-604935"
	E1202 11:49:07.678080       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xbb9t\": pod busybox-7dff88458-xbb9t is already assigned to node \"ha-604935-m02\"" pod="default/busybox-7dff88458-xbb9t"
	I1202 11:49:07.679000       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xbb9t" node="ha-604935-m02"
	
	
	==> kubelet <==
	Dec 02 11:51:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:51:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518783    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:34 ha-604935 kubelet[1316]: E1202 11:51:34.518905    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140294518371858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520250    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:44 ha-604935 kubelet[1316]: E1202 11:51:44.520275    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140304520009698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524305    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:54 ha-604935 kubelet[1316]: E1202 11:51:54.524384    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140314523474300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526662    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:04 ha-604935 kubelet[1316]: E1202 11:52:04.526711    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140324526379785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.527977    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:14 ha-604935 kubelet[1316]: E1202 11:52:14.528325    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140334527643926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530019    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:24 ha-604935 kubelet[1316]: E1202 11:52:24.530407    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140344529552485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.436289    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 11:52:34 ha-604935 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 11:52:34 ha-604935 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531571    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:34 ha-604935 kubelet[1316]: E1202 11:52:34.531618    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140354531272131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532768    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:44 ha-604935 kubelet[1316]: E1202 11:52:44.532808    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140364532554842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:54 ha-604935 kubelet[1316]: E1202 11:52:54.535693    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140374535334388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:52:54 ha-604935 kubelet[1316]: E1202 11:52:54.535796    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140374535334388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-604935 -n ha-604935
helpers_test.go:261: (dbg) Run:  kubectl --context ha-604935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.53s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (407.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-604935 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-604935 -v=7 --alsologtostderr
E1202 11:55:01.370494   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-604935 -v=7 --alsologtostderr: exit status 82 (2m1.859282613s)

-- stdout --
	* Stopping node "ha-604935-m04"  ...
	* Stopping node "ha-604935-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I1202 11:53:04.358009   28615 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:53:04.358148   28615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:53:04.358158   28615 out.go:358] Setting ErrFile to fd 2...
	I1202 11:53:04.358162   28615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:53:04.358318   28615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:53:04.358509   28615 out.go:352] Setting JSON to false
	I1202 11:53:04.358598   28615 mustload.go:65] Loading cluster: ha-604935
	I1202 11:53:04.359096   28615 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:53:04.359237   28615 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:53:04.359547   28615 mustload.go:65] Loading cluster: ha-604935
	I1202 11:53:04.359704   28615 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:53:04.359740   28615 stop.go:39] StopHost: ha-604935-m04
	I1202 11:53:04.360131   28615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:53:04.360182   28615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:53:04.374762   28615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I1202 11:53:04.375257   28615 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:53:04.375911   28615 main.go:141] libmachine: Using API Version  1
	I1202 11:53:04.375939   28615 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:53:04.376291   28615 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:53:04.378418   28615 out.go:177] * Stopping node "ha-604935-m04"  ...
	I1202 11:53:04.379543   28615 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 11:53:04.379565   28615 main.go:141] libmachine: (ha-604935-m04) Calling .DriverName
	I1202 11:53:04.379750   28615 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 11:53:04.379771   28615 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHHostname
	I1202 11:53:04.382326   28615 main.go:141] libmachine: (ha-604935-m04) DBG | domain ha-604935-m04 has defined MAC address 52:54:00:0e:58:78 in network mk-ha-604935
	I1202 11:53:04.382724   28615 main.go:141] libmachine: (ha-604935-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:58:78", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:49:30 +0000 UTC Type:0 Mac:52:54:00:0e:58:78 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-604935-m04 Clientid:01:52:54:00:0e:58:78}
	I1202 11:53:04.382750   28615 main.go:141] libmachine: (ha-604935-m04) DBG | domain ha-604935-m04 has defined IP address 192.168.39.26 and MAC address 52:54:00:0e:58:78 in network mk-ha-604935
	I1202 11:53:04.382840   28615 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHPort
	I1202 11:53:04.383001   28615 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHKeyPath
	I1202 11:53:04.383146   28615 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHUsername
	I1202 11:53:04.383306   28615 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m04/id_rsa Username:docker}
	I1202 11:53:04.473023   28615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 11:53:04.526288   28615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 11:53:04.578512   28615 main.go:141] libmachine: Stopping "ha-604935-m04"...
	I1202 11:53:04.578554   28615 main.go:141] libmachine: (ha-604935-m04) Calling .GetState
	I1202 11:53:04.579862   28615 main.go:141] libmachine: (ha-604935-m04) Calling .Stop
	I1202 11:53:04.583238   28615 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 0/120
	I1202 11:53:05.766994   28615 main.go:141] libmachine: (ha-604935-m04) Calling .GetState
	I1202 11:53:05.768251   28615 main.go:141] libmachine: Machine "ha-604935-m04" was stopped.
	I1202 11:53:05.768270   28615 stop.go:75] duration metric: took 1.388731879s to stop
	I1202 11:53:05.768288   28615 stop.go:39] StopHost: ha-604935-m03
	I1202 11:53:05.768574   28615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:53:05.768610   28615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:53:05.782508   28615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41537
	I1202 11:53:05.782936   28615 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:53:05.783367   28615 main.go:141] libmachine: Using API Version  1
	I1202 11:53:05.783389   28615 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:53:05.783681   28615 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:53:05.785251   28615 out.go:177] * Stopping node "ha-604935-m03"  ...
	I1202 11:53:05.786282   28615 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 11:53:05.786299   28615 main.go:141] libmachine: (ha-604935-m03) Calling .DriverName
	I1202 11:53:05.786491   28615 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 11:53:05.786512   28615 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHHostname
	I1202 11:53:05.789073   28615 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:53:05.789404   28615 main.go:141] libmachine: (ha-604935-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:c4:59", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:48:10 +0000 UTC Type:0 Mac:52:54:00:56:c4:59 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-604935-m03 Clientid:01:52:54:00:56:c4:59}
	I1202 11:53:05.789430   28615 main.go:141] libmachine: (ha-604935-m03) DBG | domain ha-604935-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:56:c4:59 in network mk-ha-604935
	I1202 11:53:05.789554   28615 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHPort
	I1202 11:53:05.789702   28615 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHKeyPath
	I1202 11:53:05.789820   28615 main.go:141] libmachine: (ha-604935-m03) Calling .GetSSHUsername
	I1202 11:53:05.789937   28615 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m03/id_rsa Username:docker}
	I1202 11:53:05.876812   28615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 11:53:05.930827   28615 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 11:53:05.993013   28615 main.go:141] libmachine: Stopping "ha-604935-m03"...
	I1202 11:53:05.993038   28615 main.go:141] libmachine: (ha-604935-m03) Calling .GetState
	I1202 11:53:05.994588   28615 main.go:141] libmachine: (ha-604935-m03) Calling .Stop
	I1202 11:53:05.997907   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 0/120
	I1202 11:53:06.999240   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 1/120
	I1202 11:53:08.000530   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 2/120
	I1202 11:53:09.002011   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 3/120
	I1202 11:53:10.003215   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 4/120
	I1202 11:53:11.005022   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 5/120
	I1202 11:53:12.006579   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 6/120
	I1202 11:53:13.007729   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 7/120
	I1202 11:53:14.009328   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 8/120
	I1202 11:53:15.010575   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 9/120
	I1202 11:53:16.012663   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 10/120
	I1202 11:53:17.013935   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 11/120
	I1202 11:53:18.015276   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 12/120
	I1202 11:53:19.016907   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 13/120
	I1202 11:53:20.018916   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 14/120
	I1202 11:53:21.020956   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 15/120
	I1202 11:53:22.022375   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 16/120
	I1202 11:53:23.023743   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 17/120
	I1202 11:53:24.025263   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 18/120
	I1202 11:53:25.026600   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 19/120
	I1202 11:53:26.028098   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 20/120
	I1202 11:53:27.029951   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 21/120
	I1202 11:53:28.031434   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 22/120
	I1202 11:53:29.033515   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 23/120
	I1202 11:53:30.034953   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 24/120
	I1202 11:53:31.036762   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 25/120
	I1202 11:53:32.038596   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 26/120
	I1202 11:53:33.040054   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 27/120
	I1202 11:53:34.041375   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 28/120
	I1202 11:53:35.043152   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 29/120
	I1202 11:53:36.044793   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 30/120
	I1202 11:53:37.045920   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 31/120
	I1202 11:53:38.047246   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 32/120
	I1202 11:53:39.048457   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 33/120
	I1202 11:53:40.049742   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 34/120
	I1202 11:53:41.051294   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 35/120
	I1202 11:53:42.052469   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 36/120
	I1202 11:53:43.053566   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 37/120
	I1202 11:53:44.054688   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 38/120
	I1202 11:53:45.056788   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 39/120
	I1202 11:53:46.058293   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 40/120
	I1202 11:53:47.059715   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 41/120
	I1202 11:53:48.060913   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 42/120
	I1202 11:53:49.062227   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 43/120
	I1202 11:53:50.063373   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 44/120
	I1202 11:53:51.065027   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 45/120
	I1202 11:53:52.066330   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 46/120
	I1202 11:53:53.067700   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 47/120
	I1202 11:53:54.068997   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 48/120
	I1202 11:53:55.070211   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 49/120
	I1202 11:53:56.071307   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 50/120
	I1202 11:53:57.072705   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 51/120
	I1202 11:53:58.073840   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 52/120
	I1202 11:53:59.075067   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 53/120
	I1202 11:54:00.076222   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 54/120
	I1202 11:54:01.077754   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 55/120
	I1202 11:54:02.078878   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 56/120
	I1202 11:54:03.080006   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 57/120
	I1202 11:54:04.081310   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 58/120
	I1202 11:54:05.082501   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 59/120
	I1202 11:54:06.084165   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 60/120
	I1202 11:54:07.085320   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 61/120
	I1202 11:54:08.086641   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 62/120
	I1202 11:54:09.087755   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 63/120
	I1202 11:54:10.089000   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 64/120
	I1202 11:54:11.090637   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 65/120
	I1202 11:54:12.092264   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 66/120
	I1202 11:54:13.093460   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 67/120
	I1202 11:54:14.094790   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 68/120
	I1202 11:54:15.096035   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 69/120
	I1202 11:54:16.097638   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 70/120
	I1202 11:54:17.098824   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 71/120
	I1202 11:54:18.100127   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 72/120
	I1202 11:54:19.101445   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 73/120
	I1202 11:54:20.102779   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 74/120
	I1202 11:54:21.104479   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 75/120
	I1202 11:54:22.106453   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 76/120
	I1202 11:54:23.107742   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 77/120
	I1202 11:54:24.108891   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 78/120
	I1202 11:54:25.110564   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 79/120
	I1202 11:54:26.112128   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 80/120
	I1202 11:54:27.113287   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 81/120
	I1202 11:54:28.114452   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 82/120
	I1202 11:54:29.115584   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 83/120
	I1202 11:54:30.116764   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 84/120
	I1202 11:54:31.118000   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 85/120
	I1202 11:54:32.119108   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 86/120
	I1202 11:54:33.120284   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 87/120
	I1202 11:54:34.121319   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 88/120
	I1202 11:54:35.122445   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 89/120
	I1202 11:54:36.124099   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 90/120
	I1202 11:54:37.125279   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 91/120
	I1202 11:54:38.126675   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 92/120
	I1202 11:54:39.127958   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 93/120
	I1202 11:54:40.129400   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 94/120
	I1202 11:54:41.131008   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 95/120
	I1202 11:54:42.132252   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 96/120
	I1202 11:54:43.133977   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 97/120
	I1202 11:54:44.135248   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 98/120
	I1202 11:54:45.136589   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 99/120
	I1202 11:54:46.137771   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 100/120
	I1202 11:54:47.139074   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 101/120
	I1202 11:54:48.140186   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 102/120
	I1202 11:54:49.141430   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 103/120
	I1202 11:54:50.142764   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 104/120
	I1202 11:54:51.144176   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 105/120
	I1202 11:54:52.145530   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 106/120
	I1202 11:54:53.147053   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 107/120
	I1202 11:54:54.148249   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 108/120
	I1202 11:54:55.149916   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 109/120
	I1202 11:54:56.151579   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 110/120
	I1202 11:54:57.152883   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 111/120
	I1202 11:54:58.154022   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 112/120
	I1202 11:54:59.155116   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 113/120
	I1202 11:55:00.157096   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 114/120
	I1202 11:55:01.158614   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 115/120
	I1202 11:55:02.159823   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 116/120
	I1202 11:55:03.161066   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 117/120
	I1202 11:55:04.162422   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 118/120
	I1202 11:55:05.163500   28615 main.go:141] libmachine: (ha-604935-m03) Waiting for machine to stop 119/120
	I1202 11:55:06.164318   28615 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1202 11:55:06.164387   28615 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1202 11:55:06.166043   28615 out.go:201] 
	W1202 11:55:06.167550   28615 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1202 11:55:06.167562   28615 out.go:270] * 
	* 
	W1202 11:55:06.170324   28615 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 11:55:06.171617   28615 out.go:201] 

                                                
                                                
** /stderr **
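The stderr block above shows why the stop command exits the way it does: "ha-604935-m03" is polled once per second ("Waiting for machine to stop 0/120" through "119/120") and never leaves the Running state, so after the retry budget is exhausted the command gives up. A minimal sketch of that behavior is below; the 1-second interval and 120-retry budget are inferred from the timestamps above, and vmRunning is a hypothetical stub, not a libmachine call.

// Minimal sketch of a bounded stop-poll loop. Assumptions: 1s interval,
// 120 retries (inferred from the log timestamps); vmRunning is a stand-in
// for a driver state query.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmRunning is a placeholder for asking the driver for the machine state.
func vmRunning() bool { return true }

func stopWithTimeout(retries int, interval time.Duration) error {
	for i := 0; i < retries; i++ {
		if !vmRunning() {
			return nil // machine reported stopped
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, retries)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithTimeout(120, time.Second); err != nil {
		fmt.Println("stop err:", err)
	}
}

With these numbers the loop runs for roughly two minutes, which matches the 2m1.8s elapsed time reported for the failed stop.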
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-604935 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-604935 --wait=true -v=7 --alsologtostderr
E1202 11:55:29.078877   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:57:49.238487   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:59:12.308468   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-604935 --wait=true -v=7 --alsologtostderr: (4m42.812061068s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-604935
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-604935 -n ha-604935
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 logs -n 25: (2.0599483s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m04 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp testdata/cp-test.txt                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m03 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-604935 node stop m02 -v=7                                                     | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-604935 node start m02 -v=7                                                    | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-604935 -v=7                                                           | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-604935 -v=7                                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-604935 --wait=true -v=7                                                    | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:55 UTC | 02 Dec 24 11:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-604935                                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:59 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:55:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:55:06.219136   29070 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:55:06.219238   29070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:55:06.219249   29070 out.go:358] Setting ErrFile to fd 2...
	I1202 11:55:06.219257   29070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:55:06.219443   29070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:55:06.219992   29070 out.go:352] Setting JSON to false
	I1202 11:55:06.220911   29070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2258,"bootTime":1733138248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:55:06.220997   29070 start.go:139] virtualization: kvm guest
	I1202 11:55:06.222996   29070 out.go:177] * [ha-604935] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:55:06.224616   29070 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:55:06.224640   29070 notify.go:220] Checking for updates...
	I1202 11:55:06.226806   29070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:55:06.227924   29070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:55:06.229019   29070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:55:06.230060   29070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:55:06.231172   29070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:55:06.232677   29070 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:55:06.232779   29070 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:55:06.233226   29070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:55:06.233279   29070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:55:06.247780   29070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I1202 11:55:06.248180   29070 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:55:06.248726   29070 main.go:141] libmachine: Using API Version  1
	I1202 11:55:06.248746   29070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:55:06.249061   29070 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:55:06.249241   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:55:06.283217   29070 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 11:55:06.284221   29070 start.go:297] selected driver: kvm2
	I1202 11:55:06.284277   29070 start.go:901] validating driver "kvm2" against &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:55:06.284408   29070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:55:06.284702   29070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:55:06.284765   29070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:55:06.298841   29070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:55:06.299495   29070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:55:06.299529   29070 cni.go:84] Creating CNI manager for ""
	I1202 11:55:06.299576   29070 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 11:55:06.299622   29070 start.go:340] cluster config:
	{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:55:06.299739   29070 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:55:06.301858   29070 out.go:177] * Starting "ha-604935" primary control-plane node in "ha-604935" cluster
	I1202 11:55:06.302861   29070 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:55:06.302883   29070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:55:06.302890   29070 cache.go:56] Caching tarball of preloaded images
	I1202 11:55:06.302960   29070 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:55:06.302970   29070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:55:06.303067   29070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:55:06.303286   29070 start.go:360] acquireMachinesLock for ha-604935: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:55:06.303321   29070 start.go:364] duration metric: took 19.583µs to acquireMachinesLock for "ha-604935"
	I1202 11:55:06.303333   29070 start.go:96] Skipping create...Using existing machine configuration
	I1202 11:55:06.303340   29070 fix.go:54] fixHost starting: 
	I1202 11:55:06.303580   29070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:55:06.303607   29070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:55:06.316481   29070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I1202 11:55:06.316836   29070 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:55:06.317281   29070 main.go:141] libmachine: Using API Version  1
	I1202 11:55:06.317310   29070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:55:06.317611   29070 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:55:06.317786   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:55:06.317937   29070 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:55:06.319429   29070 fix.go:112] recreateIfNeeded on ha-604935: state=Running err=<nil>
	W1202 11:55:06.319457   29070 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 11:55:06.320871   29070 out.go:177] * Updating the running kvm2 "ha-604935" VM ...
	I1202 11:55:06.321760   29070 machine.go:93] provisionDockerMachine start ...
	I1202 11:55:06.321779   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:55:06.321945   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.324070   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.324504   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.324530   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.324672   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.324823   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.324934   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.325049   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.325173   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.325394   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.325408   29070 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:55:06.434334   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:55:06.434362   29070 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:55:06.434585   29070 buildroot.go:166] provisioning hostname "ha-604935"
	I1202 11:55:06.434608   29070 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:55:06.434768   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.437236   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.437605   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.437633   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.437763   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.437917   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.438047   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.438157   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.438311   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.438520   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.438532   29070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935 && echo "ha-604935" | sudo tee /etc/hostname
	I1202 11:55:06.570070   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:55:06.570091   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.572953   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.573292   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.573314   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.573525   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.573694   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.573824   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.573930   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.574097   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.574281   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.574297   29070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:55:06.681272   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:55:06.681305   29070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:55:06.681335   29070 buildroot.go:174] setting up certificates
	I1202 11:55:06.681351   29070 provision.go:84] configureAuth start
	I1202 11:55:06.681363   29070 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:55:06.681591   29070 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:55:06.684171   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.684603   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.684629   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.684807   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.686777   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.687120   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.687136   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.687266   29070 provision.go:143] copyHostCerts
	I1202 11:55:06.687289   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:55:06.687355   29070 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:55:06.687368   29070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:55:06.687439   29070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:55:06.687530   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:55:06.687549   29070 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:55:06.687559   29070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:55:06.687585   29070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:55:06.687638   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:55:06.687653   29070 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:55:06.687659   29070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:55:06.687679   29070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:55:06.687734   29070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935 san=[127.0.0.1 192.168.39.102 ha-604935 localhost minikube]
	I1202 11:55:06.807689   29070 provision.go:177] copyRemoteCerts
	I1202 11:55:06.807747   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:55:06.807770   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.810142   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.810534   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.810552   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.810763   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.810927   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.811052   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.811217   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:55:06.898601   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:55:06.898652   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:55:06.924578   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:55:06.924629   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1202 11:55:06.950424   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:55:06.950482   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 11:55:06.981387   29070 provision.go:87] duration metric: took 300.012113ms to configureAuth
	I1202 11:55:06.981409   29070 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:55:06.981583   29070 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:55:06.981642   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.984072   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.984470   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.984492   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.984658   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.984831   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.984980   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.985123   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.985267   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.985449   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.985464   29070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:56:37.812523   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:56:37.812548   29070 machine.go:96] duration metric: took 1m31.490775603s to provisionDockerMachine
	I1202 11:56:37.812561   29070 start.go:293] postStartSetup for "ha-604935" (driver="kvm2")
	I1202 11:56:37.812574   29070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:56:37.812595   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:37.812952   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:56:37.813010   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:37.815880   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.816363   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:37.816390   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.816506   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:37.816674   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:37.816786   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:37.816898   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:56:37.900595   29070 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:56:37.904587   29070 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:56:37.904617   29070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:56:37.904686   29070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:56:37.904776   29070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:56:37.904787   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:56:37.904894   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:56:37.914448   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:56:37.937359   29070 start.go:296] duration metric: took 124.788202ms for postStartSetup
	I1202 11:56:37.937410   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:37.937631   29070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1202 11:56:37.937653   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:37.940129   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.940473   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:37.940492   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.940655   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:37.940812   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:37.940997   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:37.941124   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	W1202 11:56:38.022216   29070 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1202 11:56:38.022242   29070 fix.go:56] duration metric: took 1m31.71890237s for fixHost
	I1202 11:56:38.022261   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:38.024669   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.025017   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.025042   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.025152   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:38.025327   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.025446   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.025576   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:38.025695   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:56:38.025892   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:56:38.025909   29070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:56:38.128686   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140598.095751801
	
	I1202 11:56:38.128705   29070 fix.go:216] guest clock: 1733140598.095751801
	I1202 11:56:38.128712   29070 fix.go:229] Guest: 2024-12-02 11:56:38.095751801 +0000 UTC Remote: 2024-12-02 11:56:38.022248956 +0000 UTC m=+91.840322887 (delta=73.502845ms)
	I1202 11:56:38.128743   29070 fix.go:200] guest clock delta is within tolerance: 73.502845ms
	I1202 11:56:38.128748   29070 start.go:83] releasing machines lock for "ha-604935", held for 1m31.825418703s
	I1202 11:56:38.128770   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.128960   29070 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:56:38.131380   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.131686   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.131710   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.131868   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.132305   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.132468   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.132574   29070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:56:38.132615   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:38.132661   29070 ssh_runner.go:195] Run: cat /version.json
	I1202 11:56:38.132684   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:38.135085   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135197   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135426   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.135439   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135493   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.135515   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135621   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:38.135761   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:38.135764   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.135920   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.135931   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:38.136062   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:38.136204   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:56:38.136203   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:56:38.213939   29070 ssh_runner.go:195] Run: systemctl --version
	I1202 11:56:38.235641   29070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:56:38.397389   29070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:56:38.403912   29070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:56:38.403980   29070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:56:38.413500   29070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 11:56:38.413519   29070 start.go:495] detecting cgroup driver to use...
	I1202 11:56:38.413583   29070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:56:38.430173   29070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:56:38.442927   29070 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:56:38.442973   29070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:56:38.456480   29070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:56:38.469845   29070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:56:38.621070   29070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:56:38.769785   29070 docker.go:233] disabling docker service ...
	I1202 11:56:38.769859   29070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:56:38.785610   29070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:56:38.798752   29070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:56:38.939837   29070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:56:39.083386   29070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:56:39.096926   29070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:56:39.116960   29070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:56:39.117008   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.127903   29070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:56:39.127960   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.137880   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.148019   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.157734   29070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:56:39.167753   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.177635   29070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.188482   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.198346   29070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:56:39.207397   29070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:56:39.216211   29070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:56:39.360529   29070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:56:39.577511   29070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:56:39.577594   29070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:56:39.582537   29070 start.go:563] Will wait 60s for crictl version
	I1202 11:56:39.582576   29070 ssh_runner.go:195] Run: which crictl
	I1202 11:56:39.586300   29070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:56:39.625693   29070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:56:39.625755   29070 ssh_runner.go:195] Run: crio --version
	I1202 11:56:39.656300   29070 ssh_runner.go:195] Run: crio --version
	I1202 11:56:39.689476   29070 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:56:39.690618   29070 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:56:39.693036   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:39.693391   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:39.693418   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:39.693573   29070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:56:39.698938   29070 kubeadm.go:883] updating cluster {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:56:39.699095   29070 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:56:39.699144   29070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:56:39.743529   29070 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:56:39.743547   29070 crio.go:433] Images already preloaded, skipping extraction
	I1202 11:56:39.743593   29070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:56:39.785049   29070 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:56:39.785070   29070 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:56:39.785082   29070 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1202 11:56:39.785202   29070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:56:39.785295   29070 ssh_runner.go:195] Run: crio config
	I1202 11:56:39.831296   29070 cni.go:84] Creating CNI manager for ""
	I1202 11:56:39.831321   29070 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 11:56:39.831329   29070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:56:39.831352   29070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-604935 NodeName:ha-604935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:56:39.831476   29070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-604935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:56:39.831501   29070 kube-vip.go:115] generating kube-vip config ...
	I1202 11:56:39.831535   29070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:56:39.842999   29070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:56:39.843077   29070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 11:56:39.843129   29070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:56:39.852708   29070 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:56:39.852757   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:56:39.862530   29070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1202 11:56:39.878860   29070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:56:39.894902   29070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1202 11:56:39.911502   29070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:56:39.928363   29070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:56:39.932209   29070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:56:40.082585   29070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:56:40.097311   29070 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.102
	I1202 11:56:40.097334   29070 certs.go:194] generating shared ca certs ...
	I1202 11:56:40.097358   29070 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:56:40.097533   29070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:56:40.097588   29070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:56:40.097600   29070 certs.go:256] generating profile certs ...
	I1202 11:56:40.097715   29070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:56:40.097750   29070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6
	I1202 11:56:40.097773   29070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.211 192.168.39.254]
	I1202 11:56:40.200906   29070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6 ...
	I1202 11:56:40.200930   29070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6: {Name:mk274378b8eeaa2d4c7f254ef06067385efc1c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:56:40.201085   29070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6 ...
	I1202 11:56:40.201097   29070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6: {Name:mk27ba45179ae74e80bab83f9972480549838159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:56:40.201172   29070 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:56:40.201316   29070 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:56:40.201436   29070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:56:40.201449   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:56:40.201461   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:56:40.201472   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:56:40.201485   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:56:40.201504   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:56:40.201526   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:56:40.201538   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:56:40.201550   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:56:40.201602   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:56:40.201629   29070 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:56:40.201638   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:56:40.201662   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:56:40.201682   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:56:40.201702   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:56:40.201737   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:56:40.201765   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.201778   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.201790   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.202405   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:56:40.227815   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:56:40.250774   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:56:40.273686   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:56:40.296613   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 11:56:40.321180   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:56:40.344334   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:56:40.368152   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:56:40.392197   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:56:40.415232   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:56:40.438447   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:56:40.461495   29070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:56:40.477774   29070 ssh_runner.go:195] Run: openssl version
	I1202 11:56:40.483474   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:56:40.494031   29070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.498469   29070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.498501   29070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.504002   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:56:40.513554   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:56:40.524066   29070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.528498   29070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.528530   29070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.533960   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:56:40.543053   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:56:40.553840   29070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.558248   29070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.558286   29070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.563776   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:56:40.573468   29070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:56:40.578185   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 11:56:40.583933   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 11:56:40.589698   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 11:56:40.595411   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 11:56:40.601259   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 11:56:40.606655   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 11:56:40.612613   29070 kubeadm.go:392] StartCluster: {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:56:40.612712   29070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:56:40.612748   29070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:56:40.649415   29070 cri.go:89] found id: "9df2493973846af6f5112fe1a8d1dd836241adf2e410b1405762f6882cc165ec"
	I1202 11:56:40.649432   29070 cri.go:89] found id: "14a4bada7dd6feddee0d1b15091ae3ae75d3218e67bd43e36e4bbc098c896846"
	I1202 11:56:40.649436   29070 cri.go:89] found id: "467b3b1b152ecef6e0aa5ac1c04967ea674b1d66123561a5cea42567fc66cdbb"
	I1202 11:56:40.649439   29070 cri.go:89] found id: "be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818"
	I1202 11:56:40.649441   29070 cri.go:89] found id: "91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f"
	I1202 11:56:40.649444   29070 cri.go:89] found id: "9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b"
	I1202 11:56:40.649446   29070 cri.go:89] found id: "579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10"
	I1202 11:56:40.649449   29070 cri.go:89] found id: "f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73"
	I1202 11:56:40.649451   29070 cri.go:89] found id: "17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7"
	I1202 11:56:40.649457   29070 cri.go:89] found id: "275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41"
	I1202 11:56:40.649467   29070 cri.go:89] found id: "090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35"
	I1202 11:56:40.649472   29070 cri.go:89] found id: "53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46"
	I1202 11:56:40.649479   29070 cri.go:89] found id: "9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6"
	I1202 11:56:40.649483   29070 cri.go:89] found id: ""
	I1202 11:56:40.649521   29070 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-604935 -n ha-604935
helpers_test.go:261: (dbg) Run:  kubectl --context ha-604935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (407.41s)
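Note: in the log above, the SSH command that writes /etc/sysconfig/crio.minikube and runs "sudo systemctl restart crio" is issued at 11:55:06 and only returns at 11:56:37, which accounts for essentially all of the reported 1m31.49s provisionDockerMachine duration. A minimal Go sketch for timing that restart step in isolation (illustrative only, not part of minikube or this test run; assumes it is executed with sudo access on the guest VM):

	// time_crio_restart.go: measures how long a standalone crio restart takes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Mirrors the restart issued by the provisioning step seen in the log.
		out, err := exec.Command("sudo", "systemctl", "restart", "crio").CombinedOutput()
		elapsed := time.Since(start)
		if err != nil {
			fmt.Printf("crio restart failed after %s: %v\n%s", elapsed, err, out)
			return
		}
		fmt.Printf("crio restarted in %s\n", elapsed)
	}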

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-604935 stop -v=7 --alsologtostderr: exit status 82 (2m0.455110794s)

                                                
                                                
-- stdout --
	* Stopping node "ha-604935-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:00:08.798003   30980 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:00:08.798100   30980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:00:08.798104   30980 out.go:358] Setting ErrFile to fd 2...
	I1202 12:00:08.798108   30980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:00:08.798254   30980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:00:08.798460   30980 out.go:352] Setting JSON to false
	I1202 12:00:08.798523   30980 mustload.go:65] Loading cluster: ha-604935
	I1202 12:00:08.798914   30980 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:00:08.798991   30980 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 12:00:08.799195   30980 mustload.go:65] Loading cluster: ha-604935
	I1202 12:00:08.799316   30980 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:00:08.799336   30980 stop.go:39] StopHost: ha-604935-m04
	I1202 12:00:08.799690   30980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:00:08.799732   30980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:00:08.815271   30980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I1202 12:00:08.815723   30980 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:00:08.816223   30980 main.go:141] libmachine: Using API Version  1
	I1202 12:00:08.816255   30980 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:00:08.816578   30980 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:00:08.818596   30980 out.go:177] * Stopping node "ha-604935-m04"  ...
	I1202 12:00:08.819913   30980 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 12:00:08.819947   30980 main.go:141] libmachine: (ha-604935-m04) Calling .DriverName
	I1202 12:00:08.820124   30980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 12:00:08.820153   30980 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHHostname
	I1202 12:00:08.822642   30980 main.go:141] libmachine: (ha-604935-m04) DBG | domain ha-604935-m04 has defined MAC address 52:54:00:0e:58:78 in network mk-ha-604935
	I1202 12:00:08.823003   30980 main.go:141] libmachine: (ha-604935-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:58:78", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:59:37 +0000 UTC Type:0 Mac:52:54:00:0e:58:78 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-604935-m04 Clientid:01:52:54:00:0e:58:78}
	I1202 12:00:08.823035   30980 main.go:141] libmachine: (ha-604935-m04) DBG | domain ha-604935-m04 has defined IP address 192.168.39.26 and MAC address 52:54:00:0e:58:78 in network mk-ha-604935
	I1202 12:00:08.823212   30980 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHPort
	I1202 12:00:08.823368   30980 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHKeyPath
	I1202 12:00:08.823502   30980 main.go:141] libmachine: (ha-604935-m04) Calling .GetSSHUsername
	I1202 12:00:08.823631   30980 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935-m04/id_rsa Username:docker}
	I1202 12:00:08.910900   30980 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 12:00:08.964226   30980 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 12:00:09.016426   30980 main.go:141] libmachine: Stopping "ha-604935-m04"...
	I1202 12:00:09.016461   30980 main.go:141] libmachine: (ha-604935-m04) Calling .GetState
	I1202 12:00:09.017940   30980 main.go:141] libmachine: (ha-604935-m04) Calling .Stop
	I1202 12:00:09.021184   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 0/120
	I1202 12:00:10.022562   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 1/120
	I1202 12:00:11.023893   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 2/120
	I1202 12:00:12.025206   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 3/120
	I1202 12:00:13.026506   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 4/120
	I1202 12:00:14.028349   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 5/120
	I1202 12:00:15.029665   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 6/120
	I1202 12:00:16.031323   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 7/120
	I1202 12:00:17.033052   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 8/120
	I1202 12:00:18.034586   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 9/120
	I1202 12:00:19.036680   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 10/120
	I1202 12:00:20.038547   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 11/120
	I1202 12:00:21.039875   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 12/120
	I1202 12:00:22.041509   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 13/120
	I1202 12:00:23.042651   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 14/120
	I1202 12:00:24.044681   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 15/120
	I1202 12:00:25.045939   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 16/120
	I1202 12:00:26.047110   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 17/120
	I1202 12:00:27.048739   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 18/120
	I1202 12:00:28.049892   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 19/120
	I1202 12:00:29.051718   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 20/120
	I1202 12:00:30.052970   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 21/120
	I1202 12:00:31.054117   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 22/120
	I1202 12:00:32.055267   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 23/120
	I1202 12:00:33.056593   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 24/120
	I1202 12:00:34.058321   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 25/120
	I1202 12:00:35.059552   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 26/120
	I1202 12:00:36.060778   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 27/120
	I1202 12:00:37.061885   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 28/120
	I1202 12:00:38.063144   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 29/120
	I1202 12:00:39.064991   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 30/120
	I1202 12:00:40.066816   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 31/120
	I1202 12:00:41.068940   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 32/120
	I1202 12:00:42.070697   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 33/120
	I1202 12:00:43.071976   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 34/120
	I1202 12:00:44.073670   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 35/120
	I1202 12:00:45.075102   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 36/120
	I1202 12:00:46.076467   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 37/120
	I1202 12:00:47.078818   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 38/120
	I1202 12:00:48.080003   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 39/120
	I1202 12:00:49.082153   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 40/120
	I1202 12:00:50.083337   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 41/120
	I1202 12:00:51.084736   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 42/120
	I1202 12:00:52.086298   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 43/120
	I1202 12:00:53.087423   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 44/120
	I1202 12:00:54.089204   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 45/120
	I1202 12:00:55.090278   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 46/120
	I1202 12:00:56.091643   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 47/120
	I1202 12:00:57.093078   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 48/120
	I1202 12:00:58.094629   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 49/120
	I1202 12:00:59.096132   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 50/120
	I1202 12:01:00.097639   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 51/120
	I1202 12:01:01.099051   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 52/120
	I1202 12:01:02.100510   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 53/120
	I1202 12:01:03.102520   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 54/120
	I1202 12:01:04.104143   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 55/120
	I1202 12:01:05.105459   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 56/120
	I1202 12:01:06.106611   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 57/120
	I1202 12:01:07.107975   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 58/120
	I1202 12:01:08.110001   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 59/120
	I1202 12:01:09.112136   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 60/120
	I1202 12:01:10.113627   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 61/120
	I1202 12:01:11.115040   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 62/120
	I1202 12:01:12.116542   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 63/120
	I1202 12:01:13.118587   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 64/120
	I1202 12:01:14.120527   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 65/120
	I1202 12:01:15.122991   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 66/120
	I1202 12:01:16.124185   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 67/120
	I1202 12:01:17.125496   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 68/120
	I1202 12:01:18.126666   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 69/120
	I1202 12:01:19.128622   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 70/120
	I1202 12:01:20.130026   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 71/120
	I1202 12:01:21.131264   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 72/120
	I1202 12:01:22.132733   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 73/120
	I1202 12:01:23.133870   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 74/120
	I1202 12:01:24.135582   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 75/120
	I1202 12:01:25.136720   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 76/120
	I1202 12:01:26.137925   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 77/120
	I1202 12:01:27.139028   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 78/120
	I1202 12:01:28.140297   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 79/120
	I1202 12:01:29.141980   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 80/120
	I1202 12:01:30.143595   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 81/120
	I1202 12:01:31.144922   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 82/120
	I1202 12:01:32.146198   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 83/120
	I1202 12:01:33.147839   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 84/120
	I1202 12:01:34.149494   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 85/120
	I1202 12:01:35.150723   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 86/120
	I1202 12:01:36.152009   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 87/120
	I1202 12:01:37.153337   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 88/120
	I1202 12:01:38.154889   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 89/120
	I1202 12:01:39.156707   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 90/120
	I1202 12:01:40.158726   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 91/120
	I1202 12:01:41.160002   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 92/120
	I1202 12:01:42.161202   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 93/120
	I1202 12:01:43.163293   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 94/120
	I1202 12:01:44.165145   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 95/120
	I1202 12:01:45.166383   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 96/120
	I1202 12:01:46.167816   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 97/120
	I1202 12:01:47.169130   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 98/120
	I1202 12:01:48.170739   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 99/120
	I1202 12:01:49.172856   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 100/120
	I1202 12:01:50.174739   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 101/120
	I1202 12:01:51.176046   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 102/120
	I1202 12:01:52.177658   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 103/120
	I1202 12:01:53.179031   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 104/120
	I1202 12:01:54.180852   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 105/120
	I1202 12:01:55.182214   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 106/120
	I1202 12:01:56.183505   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 107/120
	I1202 12:01:57.184741   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 108/120
	I1202 12:01:58.186752   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 109/120
	I1202 12:01:59.188689   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 110/120
	I1202 12:02:00.190663   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 111/120
	I1202 12:02:01.191901   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 112/120
	I1202 12:02:02.193210   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 113/120
	I1202 12:02:03.194574   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 114/120
	I1202 12:02:04.196270   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 115/120
	I1202 12:02:05.197423   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 116/120
	I1202 12:02:06.198632   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 117/120
	I1202 12:02:07.199829   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 118/120
	I1202 12:02:08.201217   30980 main.go:141] libmachine: (ha-604935-m04) Waiting for machine to stop 119/120
	I1202 12:02:09.202628   30980 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1202 12:02:09.202691   30980 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1202 12:02:09.204264   30980 out.go:201] 
	W1202 12:02:09.205518   30980 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1202 12:02:09.205540   30980 out.go:270] * 
	W1202 12:02:09.208160   30980 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:02:09.209126   30980 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-604935 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr: (18.873899035s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr": 
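As context for the assertions above, here is a minimal standalone sketch of the stop-then-status sequence the test drives, written against the same binary path and profile name that appear in the log. It is an illustration only, not the test's actual implementation; the real checks live in ha_test.go and parse the printed host/kubelet/apiserver states.

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary with the given arguments and returns its
// combined stdout/stderr plus any exit error.
func run(bin string, args ...string) (string, error) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	return string(out), err
}

func main() {
	bin := "out/minikube-linux-amd64" // path used by the test harness above; an assumption outside this CI layout
	profile := "ha-604935"

	// In this report the stop command exited with status 82 (GUEST_STOP_TIMEOUT)
	// after the m04 VM stayed "Running" through 120 one-second stop retries.
	if out, err := run(bin, "-p", profile, "stop", "-v=7", "--alsologtostderr"); err != nil {
		fmt.Printf("stop failed (%v):\n%s", err, out)
	}

	// status exits non-zero once hosts are stopped, so the error is ignored here;
	// the test instead inspects the reported node states in the output.
	out, _ := run(bin, "-p", profile, "status", "-v=7", "--alsologtostderr")
	fmt.Print(out)
}

Because the stop timed out with the worker VM still running, the status output cannot show the stopped kubelets and apiservers that ha_test.go:545-554 expect, which is what the three assertion messages above report.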
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-604935 -n ha-604935
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 logs -n 25: (1.985458103s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m04 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp testdata/cp-test.txt                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt                       |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935 sudo cat                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935.txt                                 |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m02 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n                                                                 | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | ha-604935-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-604935 ssh -n ha-604935-m03 sudo cat                                          | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC | 02 Dec 24 11:50 UTC |
	|         | /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-604935 node stop m02 -v=7                                                     | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-604935 node start m02 -v=7                                                    | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-604935 -v=7                                                           | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-604935 -v=7                                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-604935 --wait=true -v=7                                                    | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:55 UTC | 02 Dec 24 11:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-604935                                                                | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:59 UTC |                     |
	| node    | ha-604935 node delete m03 -v=7                                                   | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 11:59 UTC | 02 Dec 24 12:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-604935 stop -v=7                                                              | ha-604935 | jenkins | v1.34.0 | 02 Dec 24 12:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:55:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:55:06.219136   29070 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:55:06.219238   29070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:55:06.219249   29070 out.go:358] Setting ErrFile to fd 2...
	I1202 11:55:06.219257   29070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:55:06.219443   29070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:55:06.219992   29070 out.go:352] Setting JSON to false
	I1202 11:55:06.220911   29070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2258,"bootTime":1733138248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:55:06.220997   29070 start.go:139] virtualization: kvm guest
	I1202 11:55:06.222996   29070 out.go:177] * [ha-604935] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:55:06.224616   29070 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:55:06.224640   29070 notify.go:220] Checking for updates...
	I1202 11:55:06.226806   29070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:55:06.227924   29070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:55:06.229019   29070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:55:06.230060   29070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:55:06.231172   29070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:55:06.232677   29070 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:55:06.232779   29070 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:55:06.233226   29070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:55:06.233279   29070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:55:06.247780   29070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I1202 11:55:06.248180   29070 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:55:06.248726   29070 main.go:141] libmachine: Using API Version  1
	I1202 11:55:06.248746   29070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:55:06.249061   29070 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:55:06.249241   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:55:06.283217   29070 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 11:55:06.284221   29070 start.go:297] selected driver: kvm2
	I1202 11:55:06.284277   29070 start.go:901] validating driver "kvm2" against &{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:55:06.284408   29070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:55:06.284702   29070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:55:06.284765   29070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:55:06.298841   29070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:55:06.299495   29070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:55:06.299529   29070 cni.go:84] Creating CNI manager for ""
	I1202 11:55:06.299576   29070 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 11:55:06.299622   29070 start.go:340] cluster config:
	{Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:55:06.299739   29070 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:55:06.301858   29070 out.go:177] * Starting "ha-604935" primary control-plane node in "ha-604935" cluster
	I1202 11:55:06.302861   29070 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:55:06.302883   29070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:55:06.302890   29070 cache.go:56] Caching tarball of preloaded images
	I1202 11:55:06.302960   29070 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:55:06.302970   29070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:55:06.303067   29070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/config.json ...
	I1202 11:55:06.303286   29070 start.go:360] acquireMachinesLock for ha-604935: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 11:55:06.303321   29070 start.go:364] duration metric: took 19.583µs to acquireMachinesLock for "ha-604935"
	I1202 11:55:06.303333   29070 start.go:96] Skipping create...Using existing machine configuration
	I1202 11:55:06.303340   29070 fix.go:54] fixHost starting: 
	I1202 11:55:06.303580   29070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:55:06.303607   29070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:55:06.316481   29070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I1202 11:55:06.316836   29070 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:55:06.317281   29070 main.go:141] libmachine: Using API Version  1
	I1202 11:55:06.317310   29070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:55:06.317611   29070 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:55:06.317786   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:55:06.317937   29070 main.go:141] libmachine: (ha-604935) Calling .GetState
	I1202 11:55:06.319429   29070 fix.go:112] recreateIfNeeded on ha-604935: state=Running err=<nil>
	W1202 11:55:06.319457   29070 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 11:55:06.320871   29070 out.go:177] * Updating the running kvm2 "ha-604935" VM ...
	I1202 11:55:06.321760   29070 machine.go:93] provisionDockerMachine start ...
	I1202 11:55:06.321779   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:55:06.321945   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.324070   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.324504   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.324530   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.324672   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.324823   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.324934   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.325049   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.325173   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.325394   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.325408   29070 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:55:06.434334   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:55:06.434362   29070 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:55:06.434585   29070 buildroot.go:166] provisioning hostname "ha-604935"
	I1202 11:55:06.434608   29070 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:55:06.434768   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.437236   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.437605   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.437633   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.437763   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.437917   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.438047   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.438157   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.438311   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.438520   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.438532   29070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-604935 && echo "ha-604935" | sudo tee /etc/hostname
	I1202 11:55:06.570070   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-604935
	
	I1202 11:55:06.570091   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.572953   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.573292   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.573314   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.573525   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.573694   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.573824   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.573930   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.574097   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.574281   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.574297   29070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-604935' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-604935/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-604935' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:55:06.681272   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:55:06.681305   29070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 11:55:06.681335   29070 buildroot.go:174] setting up certificates
	I1202 11:55:06.681351   29070 provision.go:84] configureAuth start
	I1202 11:55:06.681363   29070 main.go:141] libmachine: (ha-604935) Calling .GetMachineName
	I1202 11:55:06.681591   29070 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:55:06.684171   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.684603   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.684629   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.684807   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.686777   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.687120   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.687136   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.687266   29070 provision.go:143] copyHostCerts
	I1202 11:55:06.687289   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:55:06.687355   29070 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 11:55:06.687368   29070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 11:55:06.687439   29070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 11:55:06.687530   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:55:06.687549   29070 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 11:55:06.687559   29070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 11:55:06.687585   29070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 11:55:06.687638   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:55:06.687653   29070 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 11:55:06.687659   29070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 11:55:06.687679   29070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 11:55:06.687734   29070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.ha-604935 san=[127.0.0.1 192.168.39.102 ha-604935 localhost minikube]
	I1202 11:55:06.807689   29070 provision.go:177] copyRemoteCerts
	I1202 11:55:06.807747   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:55:06.807770   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.810142   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.810534   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.810552   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.810763   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.810927   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.811052   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.811217   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:55:06.898601   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:55:06.898652   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 11:55:06.924578   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:55:06.924629   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1202 11:55:06.950424   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:55:06.950482   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 11:55:06.981387   29070 provision.go:87] duration metric: took 300.012113ms to configureAuth
	I1202 11:55:06.981409   29070 buildroot.go:189] setting minikube options for container-runtime
	I1202 11:55:06.981583   29070 config.go:182] Loaded profile config "ha-604935": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:55:06.981642   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:55:06.984072   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.984470   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:55:06.984492   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:55:06.984658   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:55:06.984831   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.984980   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:55:06.985123   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:55:06.985267   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:55:06.985449   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:55:06.985464   29070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:56:37.812523   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:56:37.812548   29070 machine.go:96] duration metric: took 1m31.490775603s to provisionDockerMachine
	I1202 11:56:37.812561   29070 start.go:293] postStartSetup for "ha-604935" (driver="kvm2")
	I1202 11:56:37.812574   29070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:56:37.812595   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:37.812952   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:56:37.813010   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:37.815880   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.816363   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:37.816390   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.816506   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:37.816674   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:37.816786   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:37.816898   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:56:37.900595   29070 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:56:37.904587   29070 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 11:56:37.904617   29070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 11:56:37.904686   29070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 11:56:37.904776   29070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 11:56:37.904787   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 11:56:37.904894   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:56:37.914448   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:56:37.937359   29070 start.go:296] duration metric: took 124.788202ms for postStartSetup
	I1202 11:56:37.937410   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:37.937631   29070 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1202 11:56:37.937653   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:37.940129   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.940473   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:37.940492   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:37.940655   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:37.940812   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:37.940997   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:37.941124   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	W1202 11:56:38.022216   29070 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1202 11:56:38.022242   29070 fix.go:56] duration metric: took 1m31.71890237s for fixHost
	I1202 11:56:38.022261   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:38.024669   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.025017   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.025042   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.025152   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:38.025327   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.025446   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.025576   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:38.025695   29070 main.go:141] libmachine: Using SSH client type: native
	I1202 11:56:38.025892   29070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I1202 11:56:38.025909   29070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 11:56:38.128686   29070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733140598.095751801
	
	I1202 11:56:38.128705   29070 fix.go:216] guest clock: 1733140598.095751801
	I1202 11:56:38.128712   29070 fix.go:229] Guest: 2024-12-02 11:56:38.095751801 +0000 UTC Remote: 2024-12-02 11:56:38.022248956 +0000 UTC m=+91.840322887 (delta=73.502845ms)
	I1202 11:56:38.128743   29070 fix.go:200] guest clock delta is within tolerance: 73.502845ms
	I1202 11:56:38.128748   29070 start.go:83] releasing machines lock for "ha-604935", held for 1m31.825418703s
	I1202 11:56:38.128770   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.128960   29070 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:56:38.131380   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.131686   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.131710   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.131868   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.132305   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.132468   29070 main.go:141] libmachine: (ha-604935) Calling .DriverName
	I1202 11:56:38.132574   29070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:56:38.132615   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:38.132661   29070 ssh_runner.go:195] Run: cat /version.json
	I1202 11:56:38.132684   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHHostname
	I1202 11:56:38.135085   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135197   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135426   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.135439   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135493   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:38.135515   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:38.135621   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:38.135761   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHPort
	I1202 11:56:38.135764   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.135920   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHKeyPath
	I1202 11:56:38.135931   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:38.136062   29070 main.go:141] libmachine: (ha-604935) Calling .GetSSHUsername
	I1202 11:56:38.136204   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:56:38.136203   29070 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/ha-604935/id_rsa Username:docker}
	I1202 11:56:38.213939   29070 ssh_runner.go:195] Run: systemctl --version
	I1202 11:56:38.235641   29070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:56:38.397389   29070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 11:56:38.403912   29070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 11:56:38.403980   29070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:56:38.413500   29070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 11:56:38.413519   29070 start.go:495] detecting cgroup driver to use...
	I1202 11:56:38.413583   29070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:56:38.430173   29070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:56:38.442927   29070 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:56:38.442973   29070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:56:38.456480   29070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:56:38.469845   29070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:56:38.621070   29070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:56:38.769785   29070 docker.go:233] disabling docker service ...
	I1202 11:56:38.769859   29070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:56:38.785610   29070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:56:38.798752   29070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:56:38.939837   29070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:56:39.083386   29070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:56:39.096926   29070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:56:39.116960   29070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:56:39.117008   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.127903   29070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:56:39.127960   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.137880   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.148019   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.157734   29070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:56:39.167753   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.177635   29070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.188482   29070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:56:39.198346   29070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:56:39.207397   29070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:56:39.216211   29070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:56:39.360529   29070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:56:39.577511   29070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:56:39.577594   29070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:56:39.582537   29070 start.go:563] Will wait 60s for crictl version
	I1202 11:56:39.582576   29070 ssh_runner.go:195] Run: which crictl
	I1202 11:56:39.586300   29070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:56:39.625693   29070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 11:56:39.625755   29070 ssh_runner.go:195] Run: crio --version
	I1202 11:56:39.656300   29070 ssh_runner.go:195] Run: crio --version
	I1202 11:56:39.689476   29070 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 11:56:39.690618   29070 main.go:141] libmachine: (ha-604935) Calling .GetIP
	I1202 11:56:39.693036   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:39.693391   29070 main.go:141] libmachine: (ha-604935) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:fa:7c", ip: ""} in network mk-ha-604935: {Iface:virbr1 ExpiryTime:2024-12-02 12:46:06 +0000 UTC Type:0 Mac:52:54:00:e0:fa:7c Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-604935 Clientid:01:52:54:00:e0:fa:7c}
	I1202 11:56:39.693418   29070 main.go:141] libmachine: (ha-604935) DBG | domain ha-604935 has defined IP address 192.168.39.102 and MAC address 52:54:00:e0:fa:7c in network mk-ha-604935
	I1202 11:56:39.693573   29070 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 11:56:39.698938   29070 kubeadm.go:883] updating cluster {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:56:39.699095   29070 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:56:39.699144   29070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:56:39.743529   29070 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:56:39.743547   29070 crio.go:433] Images already preloaded, skipping extraction
	I1202 11:56:39.743593   29070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:56:39.785049   29070 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:56:39.785070   29070 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:56:39.785082   29070 kubeadm.go:934] updating node { 192.168.39.102 8443 v1.31.2 crio true true} ...
	I1202 11:56:39.785202   29070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-604935 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:56:39.785295   29070 ssh_runner.go:195] Run: crio config
	I1202 11:56:39.831296   29070 cni.go:84] Creating CNI manager for ""
	I1202 11:56:39.831321   29070 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1202 11:56:39.831329   29070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:56:39.831352   29070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.102 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-604935 NodeName:ha-604935 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:56:39.831476   29070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-604935"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.102"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.102"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:56:39.831501   29070 kube-vip.go:115] generating kube-vip config ...
	I1202 11:56:39.831535   29070 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1202 11:56:39.842999   29070 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1202 11:56:39.843077   29070 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1202 11:56:39.843129   29070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:56:39.852708   29070 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:56:39.852757   29070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:56:39.862530   29070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1202 11:56:39.878860   29070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:56:39.894902   29070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1202 11:56:39.911502   29070 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1202 11:56:39.928363   29070 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:56:39.932209   29070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:56:40.082585   29070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:56:40.097311   29070 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935 for IP: 192.168.39.102
	I1202 11:56:40.097334   29070 certs.go:194] generating shared ca certs ...
	I1202 11:56:40.097358   29070 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:56:40.097533   29070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 11:56:40.097588   29070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 11:56:40.097600   29070 certs.go:256] generating profile certs ...
	I1202 11:56:40.097715   29070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/client.key
	I1202 11:56:40.097750   29070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6
	I1202 11:56:40.097773   29070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.102 192.168.39.96 192.168.39.211 192.168.39.254]
	I1202 11:56:40.200906   29070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6 ...
	I1202 11:56:40.200930   29070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6: {Name:mk274378b8eeaa2d4c7f254ef06067385efc1c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:56:40.201085   29070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6 ...
	I1202 11:56:40.201097   29070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6: {Name:mk27ba45179ae74e80bab83f9972480549838159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:56:40.201172   29070 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt.b88be0b6 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt
	I1202 11:56:40.201316   29070 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key.b88be0b6 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key
	I1202 11:56:40.201436   29070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key
	I1202 11:56:40.201449   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:56:40.201461   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:56:40.201472   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:56:40.201485   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:56:40.201504   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:56:40.201526   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:56:40.201538   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:56:40.201550   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:56:40.201602   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 11:56:40.201629   29070 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 11:56:40.201638   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:56:40.201662   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 11:56:40.201682   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:56:40.201702   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 11:56:40.201737   29070 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 11:56:40.201765   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.201778   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.201790   29070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.202405   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:56:40.227815   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:56:40.250774   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:56:40.273686   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 11:56:40.296613   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 11:56:40.321180   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:56:40.344334   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:56:40.368152   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/ha-604935/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:56:40.392197   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 11:56:40.415232   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:56:40.438447   29070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 11:56:40.461495   29070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:56:40.477774   29070 ssh_runner.go:195] Run: openssl version
	I1202 11:56:40.483474   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 11:56:40.494031   29070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.498469   29070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.498501   29070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 11:56:40.504002   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:56:40.513554   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:56:40.524066   29070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.528498   29070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.528530   29070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:56:40.533960   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:56:40.543053   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 11:56:40.553840   29070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.558248   29070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.558286   29070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 11:56:40.563776   29070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 11:56:40.573468   29070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:56:40.578185   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 11:56:40.583933   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 11:56:40.589698   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 11:56:40.595411   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 11:56:40.601259   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 11:56:40.606655   29070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 11:56:40.612613   29070 kubeadm.go:392] StartCluster: {Name:ha-604935 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-604935 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.96 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.26 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:56:40.612712   29070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:56:40.612748   29070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:56:40.649415   29070 cri.go:89] found id: "9df2493973846af6f5112fe1a8d1dd836241adf2e410b1405762f6882cc165ec"
	I1202 11:56:40.649432   29070 cri.go:89] found id: "14a4bada7dd6feddee0d1b15091ae3ae75d3218e67bd43e36e4bbc098c896846"
	I1202 11:56:40.649436   29070 cri.go:89] found id: "467b3b1b152ecef6e0aa5ac1c04967ea674b1d66123561a5cea42567fc66cdbb"
	I1202 11:56:40.649439   29070 cri.go:89] found id: "be0c4adffd61baf5e667eef6b53a2606d1c2ffbb64a57153692043101d0cb818"
	I1202 11:56:40.649441   29070 cri.go:89] found id: "91c90e9d05cf7465a799654e9e71bb72a81897cbb382447b16cfb646ea0b205f"
	I1202 11:56:40.649444   29070 cri.go:89] found id: "9d7d77b59569bdf6c0d1dc1e7af63676f0af02f6a4027bad75b600aab2f2532b"
	I1202 11:56:40.649446   29070 cri.go:89] found id: "579b11920d9fd8fab566447565f8645262844f6a246fb52089ed791dd2409e10"
	I1202 11:56:40.649449   29070 cri.go:89] found id: "f6a700874f779b57b290234169b9fffbcf78622f4764778f27105f2a65d86b73"
	I1202 11:56:40.649451   29070 cri.go:89] found id: "17bfa0393f18795be72106a5c350d117fa20babfbbb9d3170fff23c164e3cdb7"
	I1202 11:56:40.649457   29070 cri.go:89] found id: "275d716cfd4f70dcd51088f6711b8465f6421ac2ac85d783df0eccc7647bef41"
	I1202 11:56:40.649467   29070 cri.go:89] found id: "090e4a0254277ab9331908f13af9e9f4e1fa58bc73af68307d4d34b2724e0f35"
	I1202 11:56:40.649472   29070 cri.go:89] found id: "53184ed95349ac0b915216ce50103bd9bae4bd9a38d0011f8bf07171e90d0e46"
	I1202 11:56:40.649479   29070 cri.go:89] found id: "9624bba327f9b2e412afdd92132bb63a26ce739f41d14007a6540e399973dbb6"
	I1202 11:56:40.649483   29070 cri.go:89] found id: ""
	I1202 11:56:40.649521   29070 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-604935 -n ha-604935
helpers_test.go:261: (dbg) Run:  kubectl --context ha-604935 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.89s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (323.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-191330
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-191330
E1202 12:17:49.244558   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-191330: exit status 82 (2m1.872755178s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-191330-m03"  ...
	* Stopping node "multinode-191330-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-191330" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-191330 --wait=true -v=8 --alsologtostderr
E1202 12:20:01.370279   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-191330 --wait=true -v=8 --alsologtostderr: (3m19.103871119s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-191330
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-191330 -n multinode-191330
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 logs -n 25: (2.033228845s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2126939927/001/cp-test_multinode-191330-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330:/home/docker/cp-test_multinode-191330-m02_multinode-191330.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330 sudo cat                                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m02_multinode-191330.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03:/home/docker/cp-test_multinode-191330-m02_multinode-191330-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330-m03 sudo cat                                   | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m02_multinode-191330-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp testdata/cp-test.txt                                                | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2126939927/001/cp-test_multinode-191330-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330:/home/docker/cp-test_multinode-191330-m03_multinode-191330.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330 sudo cat                                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m03_multinode-191330.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02:/home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330-m02 sudo cat                                   | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-191330 node stop m03                                                          | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	| node    | multinode-191330 node start                                                             | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:17 UTC |                     |
	| stop    | -p multinode-191330                                                                     | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:17 UTC |                     |
	| start   | -p multinode-191330                                                                     | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:19 UTC | 02 Dec 24 12:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:19:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:19:22.211468   41443 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:19:22.211639   41443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:19:22.211667   41443 out.go:358] Setting ErrFile to fd 2...
	I1202 12:19:22.211690   41443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:19:22.212250   41443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:19:22.212773   41443 out.go:352] Setting JSON to false
	I1202 12:19:22.213620   41443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3714,"bootTime":1733138248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:19:22.213699   41443 start.go:139] virtualization: kvm guest
	I1202 12:19:22.219628   41443 out.go:177] * [multinode-191330] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:19:22.224413   41443 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:19:22.224410   41443 notify.go:220] Checking for updates...
	I1202 12:19:22.229662   41443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:19:22.231148   41443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:19:22.232396   41443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:19:22.233498   41443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:19:22.234622   41443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:19:22.236088   41443 config.go:182] Loaded profile config "multinode-191330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:19:22.236176   41443 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:19:22.236616   41443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:19:22.236653   41443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:19:22.251258   41443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I1202 12:19:22.251676   41443 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:19:22.252181   41443 main.go:141] libmachine: Using API Version  1
	I1202 12:19:22.252203   41443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:19:22.252527   41443 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:19:22.252691   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:19:22.286024   41443 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:19:22.287195   41443 start.go:297] selected driver: kvm2
	I1202 12:19:22.287207   41443 start.go:901] validating driver "kvm2" against &{Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:19:22.287336   41443 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:19:22.287631   41443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:19:22.287691   41443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:19:22.301556   41443 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:19:22.302185   41443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:19:22.302210   41443 cni.go:84] Creating CNI manager for ""
	I1202 12:19:22.302259   41443 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 12:19:22.302302   41443 start.go:340] cluster config:
	{Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:19:22.302455   41443 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:19:22.303959   41443 out.go:177] * Starting "multinode-191330" primary control-plane node in "multinode-191330" cluster
	I1202 12:19:22.305053   41443 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:19:22.305084   41443 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:19:22.305093   41443 cache.go:56] Caching tarball of preloaded images
	I1202 12:19:22.305158   41443 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:19:22.305168   41443 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:19:22.305266   41443 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/config.json ...
	I1202 12:19:22.305432   41443 start.go:360] acquireMachinesLock for multinode-191330: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:19:22.305467   41443 start.go:364] duration metric: took 19.496µs to acquireMachinesLock for "multinode-191330"
	I1202 12:19:22.305483   41443 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:19:22.305491   41443 fix.go:54] fixHost starting: 
	I1202 12:19:22.305727   41443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:19:22.305752   41443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:19:22.318672   41443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I1202 12:19:22.319076   41443 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:19:22.319501   41443 main.go:141] libmachine: Using API Version  1
	I1202 12:19:22.319522   41443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:19:22.319837   41443 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:19:22.319983   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:19:22.320099   41443 main.go:141] libmachine: (multinode-191330) Calling .GetState
	I1202 12:19:22.321348   41443 fix.go:112] recreateIfNeeded on multinode-191330: state=Running err=<nil>
	W1202 12:19:22.321363   41443 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:19:22.323192   41443 out.go:177] * Updating the running kvm2 "multinode-191330" VM ...
	I1202 12:19:22.324657   41443 machine.go:93] provisionDockerMachine start ...
	I1202 12:19:22.324673   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:19:22.324845   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.327116   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.327522   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.327549   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.327665   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.327864   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.328012   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.328142   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.328312   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.328490   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.328502   41443 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:19:22.432919   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-191330
	
	I1202 12:19:22.432951   41443 main.go:141] libmachine: (multinode-191330) Calling .GetMachineName
	I1202 12:19:22.433183   41443 buildroot.go:166] provisioning hostname "multinode-191330"
	I1202 12:19:22.433207   41443 main.go:141] libmachine: (multinode-191330) Calling .GetMachineName
	I1202 12:19:22.433406   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.435906   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.436287   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.436308   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.436458   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.436653   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.436829   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.436950   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.437116   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.437265   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.437277   41443 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-191330 && echo "multinode-191330" | sudo tee /etc/hostname
	I1202 12:19:22.557004   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-191330
	
	I1202 12:19:22.557038   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.559570   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.559935   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.559973   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.560104   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.560322   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.560462   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.560569   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.560703   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.560860   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.560882   41443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-191330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-191330/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-191330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:19:22.664877   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:19:22.664902   41443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:19:22.664918   41443 buildroot.go:174] setting up certificates
	I1202 12:19:22.664927   41443 provision.go:84] configureAuth start
	I1202 12:19:22.664938   41443 main.go:141] libmachine: (multinode-191330) Calling .GetMachineName
	I1202 12:19:22.665221   41443 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:19:22.667507   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.667852   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.667873   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.668037   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.669990   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.670317   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.670346   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.670482   41443 provision.go:143] copyHostCerts
	I1202 12:19:22.670509   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:19:22.670550   41443 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:19:22.670562   41443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:19:22.670624   41443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:19:22.670700   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:19:22.670718   41443 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:19:22.670722   41443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:19:22.670746   41443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:19:22.670786   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:19:22.670803   41443 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:19:22.670809   41443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:19:22.670828   41443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:19:22.670870   41443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.multinode-191330 san=[127.0.0.1 192.168.39.135 localhost minikube multinode-191330]
	I1202 12:19:22.808036   41443 provision.go:177] copyRemoteCerts
	I1202 12:19:22.808103   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:19:22.808130   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.810781   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.811162   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.811190   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.811329   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.811510   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.811680   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.811806   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:19:22.894671   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 12:19:22.894725   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:19:22.921311   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 12:19:22.921368   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1202 12:19:22.947196   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 12:19:22.947247   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:19:22.970654   41443 provision.go:87] duration metric: took 305.7188ms to configureAuth
	I1202 12:19:22.970673   41443 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:19:22.970936   41443 config.go:182] Loaded profile config "multinode-191330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:19:22.971022   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.973563   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.973875   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.973899   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.974154   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.974323   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.974492   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.974608   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.974745   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.974889   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.974903   41443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:20:53.628037   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:20:53.628066   41443 machine.go:96] duration metric: took 1m31.303395112s to provisionDockerMachine
	I1202 12:20:53.628083   41443 start.go:293] postStartSetup for "multinode-191330" (driver="kvm2")
	I1202 12:20:53.628098   41443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:20:53.628120   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.628449   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:20:53.628486   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.631447   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.631888   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.631930   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.632086   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.632275   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.632445   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.632617   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:20:53.715084   41443 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:20:53.719272   41443 command_runner.go:130] > NAME=Buildroot
	I1202 12:20:53.719287   41443 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1202 12:20:53.719290   41443 command_runner.go:130] > ID=buildroot
	I1202 12:20:53.719297   41443 command_runner.go:130] > VERSION_ID=2023.02.9
	I1202 12:20:53.719318   41443 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1202 12:20:53.719648   41443 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:20:53.719664   41443 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:20:53.719714   41443 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:20:53.719790   41443 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:20:53.719801   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 12:20:53.719877   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:20:53.728594   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:20:53.751723   41443 start.go:296] duration metric: took 123.628453ms for postStartSetup
	I1202 12:20:53.751757   41443 fix.go:56] duration metric: took 1m31.446265018s for fixHost
	I1202 12:20:53.751778   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.754511   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.754860   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.754886   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.755034   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.755201   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.755358   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.755450   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.755599   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:20:53.755864   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:20:53.755884   41443 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:20:53.856482   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733142053.832598558
	
	I1202 12:20:53.856505   41443 fix.go:216] guest clock: 1733142053.832598558
	I1202 12:20:53.856513   41443 fix.go:229] Guest: 2024-12-02 12:20:53.832598558 +0000 UTC Remote: 2024-12-02 12:20:53.75176176 +0000 UTC m=+91.575535379 (delta=80.836798ms)
	I1202 12:20:53.856537   41443 fix.go:200] guest clock delta is within tolerance: 80.836798ms
	I1202 12:20:53.856546   41443 start.go:83] releasing machines lock for "multinode-191330", held for 1m31.551069663s
	I1202 12:20:53.856569   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.856777   41443 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:20:53.859201   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.859487   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.859513   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.859668   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.860173   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.860351   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.860459   41443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:20:53.860497   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.860558   41443 ssh_runner.go:195] Run: cat /version.json
	I1202 12:20:53.860583   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.863150   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863168   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863560   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.863583   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863608   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.863625   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863744   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.863884   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.863906   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.864025   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.864048   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.864156   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:20:53.864293   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.864452   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:20:53.960506   41443 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 12:20:53.960555   41443 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1202 12:20:53.960689   41443 ssh_runner.go:195] Run: systemctl --version
	I1202 12:20:53.966431   41443 command_runner.go:130] > systemd 252 (252)
	I1202 12:20:53.966461   41443 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1202 12:20:53.966510   41443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:20:54.127850   41443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 12:20:54.133681   41443 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 12:20:54.133884   41443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:20:54.133938   41443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:20:54.143488   41443 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 12:20:54.143507   41443 start.go:495] detecting cgroup driver to use...
	I1202 12:20:54.143561   41443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:20:54.162989   41443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:20:54.176501   41443 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:20:54.176558   41443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:20:54.189549   41443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:20:54.203799   41443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:20:54.361987   41443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:20:54.509709   41443 docker.go:233] disabling docker service ...
	I1202 12:20:54.509800   41443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:20:54.526356   41443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:20:54.539835   41443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:20:54.680464   41443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:20:54.820038   41443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:20:54.834659   41443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:20:54.852642   41443 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 12:20:54.853254   41443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:20:54.853310   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.863564   41443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:20:54.863616   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.874918   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.886024   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.896792   41443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:20:54.907484   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.918243   41443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.928880   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.939068   41443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:20:54.948196   41443 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 12:20:54.948257   41443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:20:54.957308   41443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:20:55.093641   41443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:20:55.287478   41443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:20:55.287552   41443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:20:55.292589   41443 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 12:20:55.292609   41443 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 12:20:55.292615   41443 command_runner.go:130] > Device: 0,22	Inode: 1278        Links: 1
	I1202 12:20:55.292622   41443 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 12:20:55.292627   41443 command_runner.go:130] > Access: 2024-12-02 12:20:55.157624467 +0000
	I1202 12:20:55.292632   41443 command_runner.go:130] > Modify: 2024-12-02 12:20:55.157624467 +0000
	I1202 12:20:55.292637   41443 command_runner.go:130] > Change: 2024-12-02 12:20:55.157624467 +0000
	I1202 12:20:55.292640   41443 command_runner.go:130] >  Birth: -
	I1202 12:20:55.292827   41443 start.go:563] Will wait 60s for crictl version
	I1202 12:20:55.292890   41443 ssh_runner.go:195] Run: which crictl
	I1202 12:20:55.296865   41443 command_runner.go:130] > /usr/bin/crictl
	I1202 12:20:55.296907   41443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:20:55.334618   41443 command_runner.go:130] > Version:  0.1.0
	I1202 12:20:55.334634   41443 command_runner.go:130] > RuntimeName:  cri-o
	I1202 12:20:55.334639   41443 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1202 12:20:55.334644   41443 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 12:20:55.335667   41443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:20:55.335735   41443 ssh_runner.go:195] Run: crio --version
	I1202 12:20:55.363933   41443 command_runner.go:130] > crio version 1.29.1
	I1202 12:20:55.363953   41443 command_runner.go:130] > Version:        1.29.1
	I1202 12:20:55.363961   41443 command_runner.go:130] > GitCommit:      unknown
	I1202 12:20:55.363976   41443 command_runner.go:130] > GitCommitDate:  unknown
	I1202 12:20:55.363983   41443 command_runner.go:130] > GitTreeState:   clean
	I1202 12:20:55.363992   41443 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1202 12:20:55.363996   41443 command_runner.go:130] > GoVersion:      go1.21.6
	I1202 12:20:55.364000   41443 command_runner.go:130] > Compiler:       gc
	I1202 12:20:55.364008   41443 command_runner.go:130] > Platform:       linux/amd64
	I1202 12:20:55.364011   41443 command_runner.go:130] > Linkmode:       dynamic
	I1202 12:20:55.364020   41443 command_runner.go:130] > BuildTags:      
	I1202 12:20:55.364028   41443 command_runner.go:130] >   containers_image_ostree_stub
	I1202 12:20:55.364032   41443 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1202 12:20:55.364036   41443 command_runner.go:130] >   btrfs_noversion
	I1202 12:20:55.364040   41443 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1202 12:20:55.364049   41443 command_runner.go:130] >   libdm_no_deferred_remove
	I1202 12:20:55.364052   41443 command_runner.go:130] >   seccomp
	I1202 12:20:55.364056   41443 command_runner.go:130] > LDFlags:          unknown
	I1202 12:20:55.364061   41443 command_runner.go:130] > SeccompEnabled:   true
	I1202 12:20:55.364065   41443 command_runner.go:130] > AppArmorEnabled:  false
	I1202 12:20:55.364126   41443 ssh_runner.go:195] Run: crio --version
	I1202 12:20:55.393458   41443 command_runner.go:130] > crio version 1.29.1
	I1202 12:20:55.393476   41443 command_runner.go:130] > Version:        1.29.1
	I1202 12:20:55.393484   41443 command_runner.go:130] > GitCommit:      unknown
	I1202 12:20:55.393490   41443 command_runner.go:130] > GitCommitDate:  unknown
	I1202 12:20:55.393496   41443 command_runner.go:130] > GitTreeState:   clean
	I1202 12:20:55.393504   41443 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1202 12:20:55.393510   41443 command_runner.go:130] > GoVersion:      go1.21.6
	I1202 12:20:55.393518   41443 command_runner.go:130] > Compiler:       gc
	I1202 12:20:55.393529   41443 command_runner.go:130] > Platform:       linux/amd64
	I1202 12:20:55.393540   41443 command_runner.go:130] > Linkmode:       dynamic
	I1202 12:20:55.393550   41443 command_runner.go:130] > BuildTags:      
	I1202 12:20:55.393559   41443 command_runner.go:130] >   containers_image_ostree_stub
	I1202 12:20:55.393567   41443 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1202 12:20:55.393574   41443 command_runner.go:130] >   btrfs_noversion
	I1202 12:20:55.393583   41443 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1202 12:20:55.393594   41443 command_runner.go:130] >   libdm_no_deferred_remove
	I1202 12:20:55.393600   41443 command_runner.go:130] >   seccomp
	I1202 12:20:55.393609   41443 command_runner.go:130] > LDFlags:          unknown
	I1202 12:20:55.393619   41443 command_runner.go:130] > SeccompEnabled:   true
	I1202 12:20:55.393627   41443 command_runner.go:130] > AppArmorEnabled:  false
	I1202 12:20:55.396140   41443 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:20:55.397217   41443 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:20:55.400029   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:55.400443   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:55.400466   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:55.400636   41443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:20:55.404813   41443 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1202 12:20:55.404972   41443 kubeadm.go:883] updating cluster {Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:20:55.405098   41443 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:20:55.405153   41443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:20:55.448656   41443 command_runner.go:130] > {
	I1202 12:20:55.448679   41443 command_runner.go:130] >   "images": [
	I1202 12:20:55.448686   41443 command_runner.go:130] >     {
	I1202 12:20:55.448696   41443 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1202 12:20:55.448701   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.448707   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1202 12:20:55.448711   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448716   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.448724   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1202 12:20:55.448733   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1202 12:20:55.448737   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448742   41443 command_runner.go:130] >       "size": "94965812",
	I1202 12:20:55.448750   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.448757   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.448768   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.448777   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.448785   41443 command_runner.go:130] >     },
	I1202 12:20:55.448789   41443 command_runner.go:130] >     {
	I1202 12:20:55.448798   41443 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1202 12:20:55.448802   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.448808   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1202 12:20:55.448812   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448816   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.448822   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1202 12:20:55.448836   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1202 12:20:55.448843   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448849   41443 command_runner.go:130] >       "size": "94958644",
	I1202 12:20:55.448856   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.448872   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.448881   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.448888   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.448896   41443 command_runner.go:130] >     },
	I1202 12:20:55.448905   41443 command_runner.go:130] >     {
	I1202 12:20:55.448914   41443 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1202 12:20:55.448923   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.448936   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1202 12:20:55.448944   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448951   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.448966   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1202 12:20:55.448981   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1202 12:20:55.448989   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448994   41443 command_runner.go:130] >       "size": "1363676",
	I1202 12:20:55.449001   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449008   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449018   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449027   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449035   41443 command_runner.go:130] >     },
	I1202 12:20:55.449040   41443 command_runner.go:130] >     {
	I1202 12:20:55.449052   41443 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1202 12:20:55.449062   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449073   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 12:20:55.449080   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449085   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449101   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1202 12:20:55.449128   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1202 12:20:55.449138   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449145   41443 command_runner.go:130] >       "size": "31470524",
	I1202 12:20:55.449155   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449164   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449173   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449180   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449189   41443 command_runner.go:130] >     },
	I1202 12:20:55.449198   41443 command_runner.go:130] >     {
	I1202 12:20:55.449211   41443 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1202 12:20:55.449220   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449237   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1202 12:20:55.449245   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449250   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449263   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1202 12:20:55.449275   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1202 12:20:55.449284   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449290   41443 command_runner.go:130] >       "size": "63273227",
	I1202 12:20:55.449300   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449307   41443 command_runner.go:130] >       "username": "nonroot",
	I1202 12:20:55.449317   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449326   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449331   41443 command_runner.go:130] >     },
	I1202 12:20:55.449338   41443 command_runner.go:130] >     {
	I1202 12:20:55.449351   41443 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1202 12:20:55.449368   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449379   41443 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1202 12:20:55.449388   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449397   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449411   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1202 12:20:55.449423   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1202 12:20:55.449429   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449436   41443 command_runner.go:130] >       "size": "149009664",
	I1202 12:20:55.449447   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.449457   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.449465   41443 command_runner.go:130] >       },
	I1202 12:20:55.449472   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449481   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449491   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449497   41443 command_runner.go:130] >     },
	I1202 12:20:55.449505   41443 command_runner.go:130] >     {
	I1202 12:20:55.449512   41443 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1202 12:20:55.449520   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449528   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1202 12:20:55.449543   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449553   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449568   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1202 12:20:55.449581   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1202 12:20:55.449589   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449594   41443 command_runner.go:130] >       "size": "95274464",
	I1202 12:20:55.449599   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.449604   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.449614   41443 command_runner.go:130] >       },
	I1202 12:20:55.449624   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449634   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449643   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449651   41443 command_runner.go:130] >     },
	I1202 12:20:55.449656   41443 command_runner.go:130] >     {
	I1202 12:20:55.449670   41443 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1202 12:20:55.449678   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449684   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1202 12:20:55.449698   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449710   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449739   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1202 12:20:55.449753   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1202 12:20:55.449762   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449766   41443 command_runner.go:130] >       "size": "89474374",
	I1202 12:20:55.449769   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.449778   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.449784   41443 command_runner.go:130] >       },
	I1202 12:20:55.449794   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449803   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449809   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449815   41443 command_runner.go:130] >     },
	I1202 12:20:55.449820   41443 command_runner.go:130] >     {
	I1202 12:20:55.449830   41443 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1202 12:20:55.449836   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449851   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1202 12:20:55.449855   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449858   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449869   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1202 12:20:55.449881   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1202 12:20:55.449887   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449894   41443 command_runner.go:130] >       "size": "92783513",
	I1202 12:20:55.449906   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449913   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449919   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449925   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449929   41443 command_runner.go:130] >     },
	I1202 12:20:55.449933   41443 command_runner.go:130] >     {
	I1202 12:20:55.449943   41443 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1202 12:20:55.449953   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449961   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1202 12:20:55.449969   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449975   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449986   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1202 12:20:55.449997   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1202 12:20:55.450006   41443 command_runner.go:130] >       ],
	I1202 12:20:55.450012   41443 command_runner.go:130] >       "size": "68457798",
	I1202 12:20:55.450017   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.450024   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.450029   41443 command_runner.go:130] >       },
	I1202 12:20:55.450035   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.450044   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.450051   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.450057   41443 command_runner.go:130] >     },
	I1202 12:20:55.450063   41443 command_runner.go:130] >     {
	I1202 12:20:55.450072   41443 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1202 12:20:55.450084   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.450092   41443 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1202 12:20:55.450105   41443 command_runner.go:130] >       ],
	I1202 12:20:55.450111   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.450124   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1202 12:20:55.450139   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1202 12:20:55.450146   41443 command_runner.go:130] >       ],
	I1202 12:20:55.450151   41443 command_runner.go:130] >       "size": "742080",
	I1202 12:20:55.450156   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.450165   41443 command_runner.go:130] >         "value": "65535"
	I1202 12:20:55.450175   41443 command_runner.go:130] >       },
	I1202 12:20:55.450184   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.450191   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.450201   41443 command_runner.go:130] >       "pinned": true
	I1202 12:20:55.450206   41443 command_runner.go:130] >     }
	I1202 12:20:55.450210   41443 command_runner.go:130] >   ]
	I1202 12:20:55.450219   41443 command_runner.go:130] > }
	I1202 12:20:55.450450   41443 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:20:55.450463   41443 crio.go:433] Images already preloaded, skipping extraction
	I1202 12:20:55.450504   41443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:20:55.486596   41443 command_runner.go:130] > {
	I1202 12:20:55.486608   41443 command_runner.go:130] >   "images": [
	I1202 12:20:55.486612   41443 command_runner.go:130] >     {
	I1202 12:20:55.486620   41443 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1202 12:20:55.486624   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486635   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1202 12:20:55.486638   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486643   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486654   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1202 12:20:55.486662   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1202 12:20:55.486665   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486670   41443 command_runner.go:130] >       "size": "94965812",
	I1202 12:20:55.486675   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.486678   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.486685   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.486690   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.486692   41443 command_runner.go:130] >     },
	I1202 12:20:55.486696   41443 command_runner.go:130] >     {
	I1202 12:20:55.486702   41443 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1202 12:20:55.486709   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486714   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1202 12:20:55.486717   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486722   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486732   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1202 12:20:55.486744   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1202 12:20:55.486753   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486760   41443 command_runner.go:130] >       "size": "94958644",
	I1202 12:20:55.486766   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.486777   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.486786   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.486793   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.486807   41443 command_runner.go:130] >     },
	I1202 12:20:55.486815   41443 command_runner.go:130] >     {
	I1202 12:20:55.486821   41443 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1202 12:20:55.486824   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486833   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1202 12:20:55.486842   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486849   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486864   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1202 12:20:55.486876   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1202 12:20:55.486880   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486884   41443 command_runner.go:130] >       "size": "1363676",
	I1202 12:20:55.486889   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.486893   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.486901   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.486908   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.486911   41443 command_runner.go:130] >     },
	I1202 12:20:55.486916   41443 command_runner.go:130] >     {
	I1202 12:20:55.486928   41443 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1202 12:20:55.486939   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486953   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 12:20:55.486962   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486971   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486984   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1202 12:20:55.486999   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1202 12:20:55.487008   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487019   41443 command_runner.go:130] >       "size": "31470524",
	I1202 12:20:55.487029   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.487038   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487045   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487051   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487059   41443 command_runner.go:130] >     },
	I1202 12:20:55.487065   41443 command_runner.go:130] >     {
	I1202 12:20:55.487077   41443 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1202 12:20:55.487089   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487102   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1202 12:20:55.487112   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487118   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487133   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1202 12:20:55.487148   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1202 12:20:55.487157   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487166   41443 command_runner.go:130] >       "size": "63273227",
	I1202 12:20:55.487175   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.487182   41443 command_runner.go:130] >       "username": "nonroot",
	I1202 12:20:55.487193   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487202   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487211   41443 command_runner.go:130] >     },
	I1202 12:20:55.487219   41443 command_runner.go:130] >     {
	I1202 12:20:55.487231   41443 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1202 12:20:55.487240   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487250   41443 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1202 12:20:55.487256   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487262   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487277   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1202 12:20:55.487292   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1202 12:20:55.487301   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487311   41443 command_runner.go:130] >       "size": "149009664",
	I1202 12:20:55.487320   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487330   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487339   41443 command_runner.go:130] >       },
	I1202 12:20:55.487347   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487353   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487373   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487379   41443 command_runner.go:130] >     },
	I1202 12:20:55.487384   41443 command_runner.go:130] >     {
	I1202 12:20:55.487396   41443 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1202 12:20:55.487405   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487425   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1202 12:20:55.487434   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487440   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487456   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1202 12:20:55.487471   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1202 12:20:55.487479   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487489   41443 command_runner.go:130] >       "size": "95274464",
	I1202 12:20:55.487498   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487507   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487513   41443 command_runner.go:130] >       },
	I1202 12:20:55.487518   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487527   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487538   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487547   41443 command_runner.go:130] >     },
	I1202 12:20:55.487555   41443 command_runner.go:130] >     {
	I1202 12:20:55.487568   41443 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1202 12:20:55.487577   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487585   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1202 12:20:55.487593   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487597   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487624   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1202 12:20:55.487640   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1202 12:20:55.487646   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487653   41443 command_runner.go:130] >       "size": "89474374",
	I1202 12:20:55.487662   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487670   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487679   41443 command_runner.go:130] >       },
	I1202 12:20:55.487683   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487690   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487697   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487706   41443 command_runner.go:130] >     },
	I1202 12:20:55.487716   41443 command_runner.go:130] >     {
	I1202 12:20:55.487729   41443 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1202 12:20:55.487744   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487755   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1202 12:20:55.487763   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487767   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487779   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1202 12:20:55.487798   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1202 12:20:55.487807   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487817   41443 command_runner.go:130] >       "size": "92783513",
	I1202 12:20:55.487826   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.487835   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487845   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487853   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487859   41443 command_runner.go:130] >     },
	I1202 12:20:55.487863   41443 command_runner.go:130] >     {
	I1202 12:20:55.487877   41443 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1202 12:20:55.487887   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487898   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1202 12:20:55.487907   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487916   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487931   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1202 12:20:55.487942   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1202 12:20:55.487949   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487956   41443 command_runner.go:130] >       "size": "68457798",
	I1202 12:20:55.487966   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487975   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487984   41443 command_runner.go:130] >       },
	I1202 12:20:55.487993   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.488001   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.488008   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.488017   41443 command_runner.go:130] >     },
	I1202 12:20:55.488024   41443 command_runner.go:130] >     {
	I1202 12:20:55.488030   41443 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1202 12:20:55.488039   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.488056   41443 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1202 12:20:55.488065   41443 command_runner.go:130] >       ],
	I1202 12:20:55.488075   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.488089   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1202 12:20:55.488103   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1202 12:20:55.488111   41443 command_runner.go:130] >       ],
	I1202 12:20:55.488115   41443 command_runner.go:130] >       "size": "742080",
	I1202 12:20:55.488119   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.488129   41443 command_runner.go:130] >         "value": "65535"
	I1202 12:20:55.488138   41443 command_runner.go:130] >       },
	I1202 12:20:55.488148   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.488157   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.488166   41443 command_runner.go:130] >       "pinned": true
	I1202 12:20:55.488174   41443 command_runner.go:130] >     }
	I1202 12:20:55.488183   41443 command_runner.go:130] >   ]
	I1202 12:20:55.488188   41443 command_runner.go:130] > }
	I1202 12:20:55.488379   41443 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:20:55.488394   41443 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:20:55.488403   41443 kubeadm.go:934] updating node { 192.168.39.135 8443 v1.31.2 crio true true} ...
	I1202 12:20:55.488521   41443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-191330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:20:55.488609   41443 ssh_runner.go:195] Run: crio config
	I1202 12:20:55.521071   41443 command_runner.go:130] ! time="2024-12-02 12:20:55.497300210Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1202 12:20:55.532339   41443 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 12:20:55.539146   41443 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 12:20:55.539169   41443 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 12:20:55.539179   41443 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 12:20:55.539184   41443 command_runner.go:130] > #
	I1202 12:20:55.539205   41443 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 12:20:55.539221   41443 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 12:20:55.539231   41443 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 12:20:55.539248   41443 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 12:20:55.539258   41443 command_runner.go:130] > # reload'.
	I1202 12:20:55.539270   41443 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 12:20:55.539280   41443 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 12:20:55.539288   41443 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 12:20:55.539299   41443 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 12:20:55.539306   41443 command_runner.go:130] > [crio]
	I1202 12:20:55.539311   41443 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 12:20:55.539318   41443 command_runner.go:130] > # containers images, in this directory.
	I1202 12:20:55.539325   41443 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1202 12:20:55.539336   41443 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 12:20:55.539343   41443 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1202 12:20:55.539350   41443 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 12:20:55.539356   41443 command_runner.go:130] > # imagestore = ""
	I1202 12:20:55.539364   41443 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 12:20:55.539372   41443 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 12:20:55.539379   41443 command_runner.go:130] > storage_driver = "overlay"
	I1202 12:20:55.539384   41443 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 12:20:55.539394   41443 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 12:20:55.539400   41443 command_runner.go:130] > storage_option = [
	I1202 12:20:55.539405   41443 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1202 12:20:55.539410   41443 command_runner.go:130] > ]
	I1202 12:20:55.539417   41443 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 12:20:55.539425   41443 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 12:20:55.539432   41443 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 12:20:55.539437   41443 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 12:20:55.539445   41443 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 12:20:55.539452   41443 command_runner.go:130] > # always happen on a node reboot
	I1202 12:20:55.539457   41443 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 12:20:55.539470   41443 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 12:20:55.539478   41443 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 12:20:55.539483   41443 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 12:20:55.539490   41443 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1202 12:20:55.539497   41443 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 12:20:55.539506   41443 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 12:20:55.539512   41443 command_runner.go:130] > # internal_wipe = true
	I1202 12:20:55.539519   41443 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 12:20:55.539527   41443 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 12:20:55.539541   41443 command_runner.go:130] > # internal_repair = false
	I1202 12:20:55.539549   41443 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 12:20:55.539557   41443 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 12:20:55.539566   41443 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 12:20:55.539578   41443 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 12:20:55.539590   41443 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 12:20:55.539599   41443 command_runner.go:130] > [crio.api]
	I1202 12:20:55.539611   41443 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 12:20:55.539622   41443 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 12:20:55.539633   41443 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 12:20:55.539643   41443 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 12:20:55.539653   41443 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 12:20:55.539660   41443 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 12:20:55.539664   41443 command_runner.go:130] > # stream_port = "0"
	I1202 12:20:55.539669   41443 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 12:20:55.539676   41443 command_runner.go:130] > # stream_enable_tls = false
	I1202 12:20:55.539682   41443 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 12:20:55.539688   41443 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 12:20:55.539697   41443 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 12:20:55.539705   41443 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1202 12:20:55.539710   41443 command_runner.go:130] > # minutes.
	I1202 12:20:55.539714   41443 command_runner.go:130] > # stream_tls_cert = ""
	I1202 12:20:55.539722   41443 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 12:20:55.539728   41443 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1202 12:20:55.539734   41443 command_runner.go:130] > # stream_tls_key = ""
	I1202 12:20:55.539740   41443 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 12:20:55.539748   41443 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 12:20:55.539764   41443 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1202 12:20:55.539771   41443 command_runner.go:130] > # stream_tls_ca = ""
	I1202 12:20:55.539778   41443 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 12:20:55.539785   41443 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1202 12:20:55.539791   41443 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 12:20:55.539798   41443 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1202 12:20:55.539808   41443 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 12:20:55.539816   41443 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 12:20:55.539823   41443 command_runner.go:130] > [crio.runtime]
	I1202 12:20:55.539828   41443 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 12:20:55.539836   41443 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 12:20:55.539840   41443 command_runner.go:130] > # "nofile=1024:2048"
	I1202 12:20:55.539848   41443 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 12:20:55.539851   41443 command_runner.go:130] > # default_ulimits = [
	I1202 12:20:55.539857   41443 command_runner.go:130] > # ]
	I1202 12:20:55.539862   41443 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 12:20:55.539868   41443 command_runner.go:130] > # no_pivot = false
	I1202 12:20:55.539874   41443 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 12:20:55.539886   41443 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 12:20:55.539894   41443 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 12:20:55.539899   41443 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 12:20:55.539906   41443 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 12:20:55.539912   41443 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 12:20:55.539919   41443 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1202 12:20:55.539923   41443 command_runner.go:130] > # Cgroup setting for conmon
	I1202 12:20:55.539931   41443 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 12:20:55.539937   41443 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 12:20:55.539943   41443 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 12:20:55.539950   41443 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 12:20:55.539959   41443 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 12:20:55.539965   41443 command_runner.go:130] > conmon_env = [
	I1202 12:20:55.539970   41443 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1202 12:20:55.539975   41443 command_runner.go:130] > ]
	I1202 12:20:55.539990   41443 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 12:20:55.539998   41443 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 12:20:55.540003   41443 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 12:20:55.540009   41443 command_runner.go:130] > # default_env = [
	I1202 12:20:55.540013   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540021   41443 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 12:20:55.540035   41443 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1202 12:20:55.540041   41443 command_runner.go:130] > # selinux = false
	I1202 12:20:55.540047   41443 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 12:20:55.540055   41443 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1202 12:20:55.540063   41443 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1202 12:20:55.540067   41443 command_runner.go:130] > # seccomp_profile = ""
	I1202 12:20:55.540074   41443 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1202 12:20:55.540080   41443 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1202 12:20:55.540087   41443 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1202 12:20:55.540091   41443 command_runner.go:130] > # which might increase security.
	I1202 12:20:55.540098   41443 command_runner.go:130] > # This option is currently deprecated,
	I1202 12:20:55.540104   41443 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1202 12:20:55.540115   41443 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1202 12:20:55.540123   41443 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 12:20:55.540131   41443 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 12:20:55.540137   41443 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 12:20:55.540145   41443 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 12:20:55.540151   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.540158   41443 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 12:20:55.540163   41443 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 12:20:55.540170   41443 command_runner.go:130] > # the cgroup blockio controller.
	I1202 12:20:55.540174   41443 command_runner.go:130] > # blockio_config_file = ""
	I1202 12:20:55.540183   41443 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 12:20:55.540189   41443 command_runner.go:130] > # blockio parameters.
	I1202 12:20:55.540193   41443 command_runner.go:130] > # blockio_reload = false
	I1202 12:20:55.540199   41443 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 12:20:55.540205   41443 command_runner.go:130] > # irqbalance daemon.
	I1202 12:20:55.540210   41443 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 12:20:55.540222   41443 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 12:20:55.540243   41443 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 12:20:55.540258   41443 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 12:20:55.540267   41443 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 12:20:55.540273   41443 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 12:20:55.540285   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.540291   41443 command_runner.go:130] > # rdt_config_file = ""
	I1202 12:20:55.540297   41443 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 12:20:55.540303   41443 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 12:20:55.540339   41443 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 12:20:55.540348   41443 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 12:20:55.540353   41443 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 12:20:55.540359   41443 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 12:20:55.540367   41443 command_runner.go:130] > # will be added.
	I1202 12:20:55.540371   41443 command_runner.go:130] > # default_capabilities = [
	I1202 12:20:55.540377   41443 command_runner.go:130] > # 	"CHOWN",
	I1202 12:20:55.540381   41443 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 12:20:55.540387   41443 command_runner.go:130] > # 	"FSETID",
	I1202 12:20:55.540390   41443 command_runner.go:130] > # 	"FOWNER",
	I1202 12:20:55.540397   41443 command_runner.go:130] > # 	"SETGID",
	I1202 12:20:55.540400   41443 command_runner.go:130] > # 	"SETUID",
	I1202 12:20:55.540406   41443 command_runner.go:130] > # 	"SETPCAP",
	I1202 12:20:55.540411   41443 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 12:20:55.540416   41443 command_runner.go:130] > # 	"KILL",
	I1202 12:20:55.540419   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540429   41443 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 12:20:55.540437   41443 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 12:20:55.540443   41443 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 12:20:55.540448   41443 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 12:20:55.540456   41443 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 12:20:55.540459   41443 command_runner.go:130] > default_sysctls = [
	I1202 12:20:55.540466   41443 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 12:20:55.540469   41443 command_runner.go:130] > ]
	I1202 12:20:55.540476   41443 command_runner.go:130] > # List of devices on the host that a
	I1202 12:20:55.540482   41443 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 12:20:55.540488   41443 command_runner.go:130] > # allowed_devices = [
	I1202 12:20:55.540492   41443 command_runner.go:130] > # 	"/dev/fuse",
	I1202 12:20:55.540497   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540506   41443 command_runner.go:130] > # List of additional devices. specified as
	I1202 12:20:55.540515   41443 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 12:20:55.540523   41443 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 12:20:55.540534   41443 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 12:20:55.540540   41443 command_runner.go:130] > # additional_devices = [
	I1202 12:20:55.540544   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540551   41443 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 12:20:55.540555   41443 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 12:20:55.540561   41443 command_runner.go:130] > # 	"/etc/cdi",
	I1202 12:20:55.540566   41443 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 12:20:55.540574   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540587   41443 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 12:20:55.540599   41443 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 12:20:55.540609   41443 command_runner.go:130] > # Defaults to false.
	I1202 12:20:55.540617   41443 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 12:20:55.540630   41443 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 12:20:55.540642   41443 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 12:20:55.540659   41443 command_runner.go:130] > # hooks_dir = [
	I1202 12:20:55.540670   41443 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 12:20:55.540676   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540682   41443 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 12:20:55.540690   41443 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 12:20:55.540696   41443 command_runner.go:130] > # its default mounts from the following two files:
	I1202 12:20:55.540698   41443 command_runner.go:130] > #
	I1202 12:20:55.540704   41443 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 12:20:55.540712   41443 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 12:20:55.540720   41443 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 12:20:55.540728   41443 command_runner.go:130] > #
	I1202 12:20:55.540742   41443 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 12:20:55.540755   41443 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 12:20:55.540768   41443 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 12:20:55.540778   41443 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 12:20:55.540786   41443 command_runner.go:130] > #
	I1202 12:20:55.540800   41443 command_runner.go:130] > # default_mounts_file = ""
	I1202 12:20:55.540812   41443 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 12:20:55.540826   41443 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 12:20:55.540835   41443 command_runner.go:130] > pids_limit = 1024
	I1202 12:20:55.540847   41443 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 12:20:55.540855   41443 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 12:20:55.540863   41443 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 12:20:55.540873   41443 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 12:20:55.540879   41443 command_runner.go:130] > # log_size_max = -1
	I1202 12:20:55.540888   41443 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 12:20:55.540895   41443 command_runner.go:130] > # log_to_journald = false
	I1202 12:20:55.540901   41443 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 12:20:55.540907   41443 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 12:20:55.540912   41443 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 12:20:55.540919   41443 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 12:20:55.540924   41443 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 12:20:55.540930   41443 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 12:20:55.540936   41443 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 12:20:55.540941   41443 command_runner.go:130] > # read_only = false
	I1202 12:20:55.540951   41443 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 12:20:55.540964   41443 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 12:20:55.540973   41443 command_runner.go:130] > # live configuration reload.
	I1202 12:20:55.540979   41443 command_runner.go:130] > # log_level = "info"
	I1202 12:20:55.540991   41443 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 12:20:55.541002   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.541013   41443 command_runner.go:130] > # log_filter = ""
	I1202 12:20:55.541025   41443 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 12:20:55.541038   41443 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 12:20:55.541047   41443 command_runner.go:130] > # separated by comma.
	I1202 12:20:55.541062   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541072   41443 command_runner.go:130] > # uid_mappings = ""
	I1202 12:20:55.541081   41443 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 12:20:55.541094   41443 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 12:20:55.541115   41443 command_runner.go:130] > # separated by comma.
	I1202 12:20:55.541134   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541143   41443 command_runner.go:130] > # gid_mappings = ""
	I1202 12:20:55.541152   41443 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 12:20:55.541160   41443 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 12:20:55.541166   41443 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 12:20:55.541176   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541182   41443 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 12:20:55.541188   41443 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 12:20:55.541196   41443 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 12:20:55.541202   41443 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 12:20:55.541212   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541221   41443 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 12:20:55.541229   41443 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 12:20:55.541234   41443 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 12:20:55.541242   41443 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 12:20:55.541245   41443 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 12:20:55.541254   41443 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 12:20:55.541259   41443 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 12:20:55.541266   41443 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 12:20:55.541271   41443 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 12:20:55.541277   41443 command_runner.go:130] > drop_infra_ctr = false
	I1202 12:20:55.541283   41443 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 12:20:55.541291   41443 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 12:20:55.541301   41443 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 12:20:55.541307   41443 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 12:20:55.541313   41443 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 12:20:55.541322   41443 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 12:20:55.541327   41443 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 12:20:55.541334   41443 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 12:20:55.541338   41443 command_runner.go:130] > # shared_cpuset = ""
	I1202 12:20:55.541346   41443 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 12:20:55.541350   41443 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 12:20:55.541365   41443 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 12:20:55.541375   41443 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 12:20:55.541381   41443 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1202 12:20:55.541386   41443 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 12:20:55.541395   41443 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 12:20:55.541402   41443 command_runner.go:130] > # enable_criu_support = false
	I1202 12:20:55.541407   41443 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 12:20:55.541414   41443 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 12:20:55.541421   41443 command_runner.go:130] > # enable_pod_events = false
	I1202 12:20:55.541428   41443 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 12:20:55.541452   41443 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 12:20:55.541461   41443 command_runner.go:130] > # default_runtime = "runc"
	I1202 12:20:55.541472   41443 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 12:20:55.541487   41443 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1202 12:20:55.541504   41443 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 12:20:55.541514   41443 command_runner.go:130] > # creation as a file is not desired either.
	I1202 12:20:55.541524   41443 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 12:20:55.541531   41443 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 12:20:55.541535   41443 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 12:20:55.541541   41443 command_runner.go:130] > # ]
	I1202 12:20:55.541547   41443 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 12:20:55.541555   41443 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 12:20:55.541560   41443 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 12:20:55.541572   41443 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 12:20:55.541577   41443 command_runner.go:130] > #
	I1202 12:20:55.541584   41443 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 12:20:55.541592   41443 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 12:20:55.541654   41443 command_runner.go:130] > # runtime_type = "oci"
	I1202 12:20:55.541665   41443 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 12:20:55.541671   41443 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 12:20:55.541675   41443 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 12:20:55.541680   41443 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 12:20:55.541689   41443 command_runner.go:130] > # monitor_env = []
	I1202 12:20:55.541696   41443 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 12:20:55.541700   41443 command_runner.go:130] > # allowed_annotations = []
	I1202 12:20:55.541705   41443 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 12:20:55.541711   41443 command_runner.go:130] > # Where:
	I1202 12:20:55.541716   41443 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 12:20:55.541724   41443 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 12:20:55.541730   41443 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 12:20:55.541736   41443 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 12:20:55.541740   41443 command_runner.go:130] > #   in $PATH.
	I1202 12:20:55.541747   41443 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 12:20:55.541754   41443 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 12:20:55.541760   41443 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 12:20:55.541766   41443 command_runner.go:130] > #   state.
	I1202 12:20:55.541771   41443 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 12:20:55.541777   41443 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 12:20:55.541785   41443 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 12:20:55.541790   41443 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 12:20:55.541798   41443 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 12:20:55.541805   41443 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 12:20:55.541813   41443 command_runner.go:130] > #   The currently recognized values are:
	I1202 12:20:55.541819   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 12:20:55.541828   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 12:20:55.541836   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 12:20:55.541844   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 12:20:55.541851   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 12:20:55.541859   41443 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 12:20:55.541865   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 12:20:55.541873   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 12:20:55.541879   41443 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 12:20:55.541887   41443 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 12:20:55.541891   41443 command_runner.go:130] > #   deprecated option "conmon".
	I1202 12:20:55.541900   41443 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 12:20:55.541915   41443 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 12:20:55.541923   41443 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 12:20:55.541928   41443 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 12:20:55.541938   41443 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 12:20:55.541942   41443 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 12:20:55.541948   41443 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 12:20:55.541956   41443 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 12:20:55.541958   41443 command_runner.go:130] > #
	I1202 12:20:55.541963   41443 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 12:20:55.541968   41443 command_runner.go:130] > #
	I1202 12:20:55.541973   41443 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 12:20:55.541982   41443 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 12:20:55.541985   41443 command_runner.go:130] > #
	I1202 12:20:55.541991   41443 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 12:20:55.541999   41443 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 12:20:55.542002   41443 command_runner.go:130] > #
	I1202 12:20:55.542008   41443 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 12:20:55.542013   41443 command_runner.go:130] > # feature.
	I1202 12:20:55.542016   41443 command_runner.go:130] > #
	I1202 12:20:55.542022   41443 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 12:20:55.542030   41443 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 12:20:55.542036   41443 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 12:20:55.542046   41443 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 12:20:55.542052   41443 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 12:20:55.542056   41443 command_runner.go:130] > #
	I1202 12:20:55.542061   41443 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 12:20:55.542069   41443 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 12:20:55.542073   41443 command_runner.go:130] > #
	I1202 12:20:55.542078   41443 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 12:20:55.542085   41443 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 12:20:55.542088   41443 command_runner.go:130] > #
	I1202 12:20:55.542094   41443 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 12:20:55.542102   41443 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 12:20:55.542115   41443 command_runner.go:130] > # limitation.
	I1202 12:20:55.542122   41443 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 12:20:55.542126   41443 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1202 12:20:55.542129   41443 command_runner.go:130] > runtime_type = "oci"
	I1202 12:20:55.542133   41443 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 12:20:55.542137   41443 command_runner.go:130] > runtime_config_path = ""
	I1202 12:20:55.542142   41443 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 12:20:55.542146   41443 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 12:20:55.542150   41443 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 12:20:55.542153   41443 command_runner.go:130] > monitor_env = [
	I1202 12:20:55.542159   41443 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1202 12:20:55.542163   41443 command_runner.go:130] > ]
	I1202 12:20:55.542168   41443 command_runner.go:130] > privileged_without_host_devices = false
	I1202 12:20:55.542176   41443 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 12:20:55.542181   41443 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 12:20:55.542188   41443 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 12:20:55.542195   41443 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1202 12:20:55.542204   41443 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1202 12:20:55.542212   41443 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 12:20:55.542220   41443 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 12:20:55.542230   41443 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 12:20:55.542235   41443 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 12:20:55.542242   41443 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 12:20:55.542244   41443 command_runner.go:130] > # Example:
	I1202 12:20:55.542249   41443 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 12:20:55.542252   41443 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 12:20:55.542259   41443 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 12:20:55.542263   41443 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 12:20:55.542266   41443 command_runner.go:130] > # cpuset = 0
	I1202 12:20:55.542269   41443 command_runner.go:130] > # cpushares = "0-1"
	I1202 12:20:55.542272   41443 command_runner.go:130] > # Where:
	I1202 12:20:55.542277   41443 command_runner.go:130] > # The workload name is workload-type.
	I1202 12:20:55.542283   41443 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 12:20:55.542292   41443 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 12:20:55.542296   41443 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 12:20:55.542303   41443 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 12:20:55.542308   41443 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1202 12:20:55.542312   41443 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 12:20:55.542318   41443 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 12:20:55.542321   41443 command_runner.go:130] > # Default value is set to true
	I1202 12:20:55.542325   41443 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 12:20:55.542330   41443 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 12:20:55.542334   41443 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 12:20:55.542338   41443 command_runner.go:130] > # Default value is set to 'false'
	I1202 12:20:55.542342   41443 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 12:20:55.542353   41443 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 12:20:55.542356   41443 command_runner.go:130] > #
	I1202 12:20:55.542363   41443 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 12:20:55.542369   41443 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1202 12:20:55.542375   41443 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1202 12:20:55.542380   41443 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1202 12:20:55.542385   41443 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1202 12:20:55.542388   41443 command_runner.go:130] > [crio.image]
	I1202 12:20:55.542393   41443 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 12:20:55.542397   41443 command_runner.go:130] > # default_transport = "docker://"
	I1202 12:20:55.542402   41443 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 12:20:55.542408   41443 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 12:20:55.542411   41443 command_runner.go:130] > # global_auth_file = ""
	I1202 12:20:55.542416   41443 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 12:20:55.542423   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.542427   41443 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1202 12:20:55.542433   41443 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 12:20:55.542441   41443 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 12:20:55.542449   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.542455   41443 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 12:20:55.542462   41443 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 12:20:55.542472   41443 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1202 12:20:55.542480   41443 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1202 12:20:55.542485   41443 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 12:20:55.542490   41443 command_runner.go:130] > # pause_command = "/pause"
	I1202 12:20:55.542496   41443 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 12:20:55.542509   41443 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 12:20:55.542514   41443 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 12:20:55.542522   41443 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 12:20:55.542527   41443 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 12:20:55.542535   41443 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 12:20:55.542539   41443 command_runner.go:130] > # pinned_images = [
	I1202 12:20:55.542544   41443 command_runner.go:130] > # ]
	I1202 12:20:55.542549   41443 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 12:20:55.542558   41443 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 12:20:55.542564   41443 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 12:20:55.542576   41443 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 12:20:55.542587   41443 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 12:20:55.542597   41443 command_runner.go:130] > # signature_policy = ""
	I1202 12:20:55.542605   41443 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 12:20:55.542618   41443 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 12:20:55.542631   41443 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 12:20:55.542643   41443 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1202 12:20:55.542655   41443 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 12:20:55.542665   41443 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 12:20:55.542673   41443 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 12:20:55.542681   41443 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 12:20:55.542685   41443 command_runner.go:130] > # changing them here.
	I1202 12:20:55.542691   41443 command_runner.go:130] > # insecure_registries = [
	I1202 12:20:55.542694   41443 command_runner.go:130] > # ]
	I1202 12:20:55.542700   41443 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 12:20:55.542707   41443 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 12:20:55.542711   41443 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 12:20:55.542716   41443 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 12:20:55.542728   41443 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 12:20:55.542748   41443 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 12:20:55.542754   41443 command_runner.go:130] > # CNI plugins.
	I1202 12:20:55.542758   41443 command_runner.go:130] > [crio.network]
	I1202 12:20:55.542763   41443 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 12:20:55.542769   41443 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 12:20:55.542774   41443 command_runner.go:130] > # cni_default_network = ""
	I1202 12:20:55.542779   41443 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 12:20:55.542785   41443 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 12:20:55.542790   41443 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 12:20:55.542794   41443 command_runner.go:130] > # plugin_dirs = [
	I1202 12:20:55.542798   41443 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 12:20:55.542801   41443 command_runner.go:130] > # ]
	I1202 12:20:55.542807   41443 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 12:20:55.542811   41443 command_runner.go:130] > [crio.metrics]
	I1202 12:20:55.542816   41443 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 12:20:55.542820   41443 command_runner.go:130] > enable_metrics = true
	I1202 12:20:55.542825   41443 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 12:20:55.542832   41443 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 12:20:55.542838   41443 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1202 12:20:55.542846   41443 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 12:20:55.542852   41443 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 12:20:55.542858   41443 command_runner.go:130] > # metrics_collectors = [
	I1202 12:20:55.542861   41443 command_runner.go:130] > # 	"operations",
	I1202 12:20:55.542865   41443 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1202 12:20:55.542870   41443 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1202 12:20:55.542874   41443 command_runner.go:130] > # 	"operations_errors",
	I1202 12:20:55.542878   41443 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1202 12:20:55.542882   41443 command_runner.go:130] > # 	"image_pulls_by_name",
	I1202 12:20:55.542886   41443 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1202 12:20:55.542890   41443 command_runner.go:130] > # 	"image_pulls_failures",
	I1202 12:20:55.542894   41443 command_runner.go:130] > # 	"image_pulls_successes",
	I1202 12:20:55.542898   41443 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 12:20:55.542906   41443 command_runner.go:130] > # 	"image_layer_reuse",
	I1202 12:20:55.542912   41443 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 12:20:55.542916   41443 command_runner.go:130] > # 	"containers_oom_total",
	I1202 12:20:55.542920   41443 command_runner.go:130] > # 	"containers_oom",
	I1202 12:20:55.542924   41443 command_runner.go:130] > # 	"processes_defunct",
	I1202 12:20:55.542928   41443 command_runner.go:130] > # 	"operations_total",
	I1202 12:20:55.542932   41443 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 12:20:55.542936   41443 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 12:20:55.542940   41443 command_runner.go:130] > # 	"operations_errors_total",
	I1202 12:20:55.542944   41443 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 12:20:55.542948   41443 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 12:20:55.542954   41443 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 12:20:55.542958   41443 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 12:20:55.542967   41443 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 12:20:55.542971   41443 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 12:20:55.542977   41443 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 12:20:55.542981   41443 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 12:20:55.542984   41443 command_runner.go:130] > # ]
	I1202 12:20:55.542989   41443 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 12:20:55.542994   41443 command_runner.go:130] > # metrics_port = 9090
	I1202 12:20:55.542998   41443 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 12:20:55.543004   41443 command_runner.go:130] > # metrics_socket = ""
	I1202 12:20:55.543009   41443 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 12:20:55.543014   41443 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 12:20:55.543021   41443 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 12:20:55.543025   41443 command_runner.go:130] > # certificate on any modification event.
	I1202 12:20:55.543030   41443 command_runner.go:130] > # metrics_cert = ""
	I1202 12:20:55.543035   41443 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 12:20:55.543042   41443 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 12:20:55.543046   41443 command_runner.go:130] > # metrics_key = ""
	I1202 12:20:55.543054   41443 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 12:20:55.543058   41443 command_runner.go:130] > [crio.tracing]
	I1202 12:20:55.543065   41443 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 12:20:55.543073   41443 command_runner.go:130] > # enable_tracing = false
	I1202 12:20:55.543081   41443 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1202 12:20:55.543085   41443 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1202 12:20:55.543091   41443 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 12:20:55.543098   41443 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 12:20:55.543101   41443 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 12:20:55.543105   41443 command_runner.go:130] > [crio.nri]
	I1202 12:20:55.543113   41443 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 12:20:55.543120   41443 command_runner.go:130] > # enable_nri = false
	I1202 12:20:55.543124   41443 command_runner.go:130] > # NRI socket to listen on.
	I1202 12:20:55.543128   41443 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 12:20:55.543132   41443 command_runner.go:130] > # NRI plugin directory to use.
	I1202 12:20:55.543137   41443 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 12:20:55.543143   41443 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 12:20:55.543147   41443 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 12:20:55.543153   41443 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 12:20:55.543157   41443 command_runner.go:130] > # nri_disable_connections = false
	I1202 12:20:55.543164   41443 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 12:20:55.543169   41443 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 12:20:55.543176   41443 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 12:20:55.543180   41443 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 12:20:55.543185   41443 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 12:20:55.543191   41443 command_runner.go:130] > [crio.stats]
	I1202 12:20:55.543200   41443 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 12:20:55.543207   41443 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 12:20:55.543211   41443 command_runner.go:130] > # stats_collection_period = 0
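
The dump above is the CRI-O configuration rendered on the node. As an illustration only (not part of the test run), a Go sketch along the following lines could decode a couple of the fields shown, pids_limit and pause_image, from a crio.conf file; the path /etc/crio/crio.conf and the use of the github.com/BurntSushi/toml package are assumptions made for the example.

	// check_crio_conf.go - minimal sketch; assumes the config is at /etc/crio/crio.conf
	// and that github.com/BurntSushi/toml is available.
	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Only the keys inspected here are declared; everything else in the file is ignored.
	type crioConf struct {
		Crio struct {
			Runtime struct {
				PidsLimit int    `toml:"pids_limit"`
				PinnsPath string `toml:"pinns_path"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		var conf crioConf
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &conf); err != nil {
			log.Fatalf("decode crio.conf: %v", err)
		}
		fmt.Println("pids_limit :", conf.Crio.Runtime.PidsLimit)
		fmt.Println("pinns_path :", conf.Crio.Runtime.PinnsPath)
		fmt.Println("pause_image:", conf.Crio.Image.PauseImage)
	}
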
	I1202 12:20:55.543291   41443 cni.go:84] Creating CNI manager for ""
	I1202 12:20:55.543302   41443 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 12:20:55.543311   41443 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:20:55.543333   41443 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-191330 NodeName:multinode-191330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:20:55.543459   41443 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-191330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:20:55.543519   41443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:20:55.553911   41443 command_runner.go:130] > kubeadm
	I1202 12:20:55.553926   41443 command_runner.go:130] > kubectl
	I1202 12:20:55.553930   41443 command_runner.go:130] > kubelet
	I1202 12:20:55.553948   41443 binaries.go:44] Found k8s binaries, skipping transfer
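
The transfer is skipped because kubeadm, kubectl and kubelet are already present under the versioned binaries directory. A minimal sketch of that kind of presence check, with the directory path taken from the log above, might look like:

	// binaries_present.go - illustrative sketch of a "found binaries, skip transfer" check.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dir := "/var/lib/minikube/binaries/v1.31.2" // version taken from the log above
		missing := 0
		for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
				fmt.Println("missing:", bin, err)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all k8s binaries present, transfer can be skipped")
		}
	}
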
	I1202 12:20:55.553994   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:20:55.563366   41443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1202 12:20:55.579319   41443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:20:55.595269   41443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
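
The kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch of splitting and spot-checking such a stream with gopkg.in/yaml.v3 follows; the fields pulled out are illustrative choices, not minikube's own validation.

	// kubeadm_yaml_check.go - sketch only; assumes the rendered config was saved to
	// /var/tmp/minikube/kubeadm.yaml.new as shown in the log above.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatalf("invalid YAML document: %v", err)
			}
			fmt.Println("kind:", doc["kind"])
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
			}
		}
	}
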
	I1202 12:20:55.611869   41443 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I1202 12:20:55.615648   41443 command_runner.go:130] > 192.168.39.135	control-plane.minikube.internal
	I1202 12:20:55.615856   41443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:20:55.756053   41443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:20:55.770815   41443 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330 for IP: 192.168.39.135
	I1202 12:20:55.770831   41443 certs.go:194] generating shared ca certs ...
	I1202 12:20:55.770845   41443 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:20:55.770970   41443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:20:55.771013   41443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:20:55.771022   41443 certs.go:256] generating profile certs ...
	I1202 12:20:55.771099   41443 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/client.key
	I1202 12:20:55.771161   41443 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.key.cbfd379d
	I1202 12:20:55.771198   41443 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.key
	I1202 12:20:55.771208   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 12:20:55.771219   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 12:20:55.771232   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 12:20:55.771241   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 12:20:55.771258   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 12:20:55.771274   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 12:20:55.771286   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 12:20:55.771298   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 12:20:55.771343   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:20:55.771383   41443 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:20:55.771392   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:20:55.771415   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:20:55.771439   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:20:55.771459   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:20:55.771498   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:20:55.771522   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 12:20:55.771565   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:55.771580   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 12:20:55.772151   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:20:55.795885   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:20:55.842071   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:20:55.868301   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:20:55.894659   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 12:20:55.922601   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 12:20:55.948867   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:20:55.975123   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 12:20:55.998127   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:20:56.021914   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:20:56.044642   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:20:56.067558   41443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:20:56.083404   41443 ssh_runner.go:195] Run: openssl version
	I1202 12:20:56.089051   41443 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1202 12:20:56.089126   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:20:56.099631   41443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.103910   41443 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.104092   41443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.104127   41443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.109547   41443 command_runner.go:130] > 51391683
	I1202 12:20:56.109776   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:20:56.118628   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:20:56.129157   41443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.133320   41443 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.133356   41443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.133392   41443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.138787   41443 command_runner.go:130] > 3ec20f2e
	I1202 12:20:56.138844   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:20:56.147932   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:20:56.158232   41443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.162263   41443 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.162385   41443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.162420   41443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.167552   41443 command_runner.go:130] > b5213941
	I1202 12:20:56.167784   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
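
Each CA certificate above is hashed with openssl x509 -hash and linked as /etc/ssl/certs/<hash>.0 so that OpenSSL's subject-hash lookup can find it. A sketch of verifying one such link, shelling out to openssl exactly as the log does, could look like this (the certificate path is the minikubeCA one used above):

	// hash_symlink_check.go - sketch: verify the /etc/ssl/certs/<subject-hash>.0 link for one cert.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// Same invocation the log shows; openssl prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatalf("openssl x509 -hash: %v", err)
		}
		hash := strings.TrimSpace(string(out))

		link := "/etc/ssl/certs/" + hash + ".0"
		target, err := os.Readlink(link)
		if err != nil {
			log.Fatalf("readlink %s: %v", link, err)
		}
		fmt.Printf("%s -> %s\n", link, target)
	}
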
	I1202 12:20:56.176489   41443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:20:56.180837   41443 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:20:56.180852   41443 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 12:20:56.180858   41443 command_runner.go:130] > Device: 253,1	Inode: 3150382     Links: 1
	I1202 12:20:56.180866   41443 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 12:20:56.180877   41443 command_runner.go:130] > Access: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.180890   41443 command_runner.go:130] > Modify: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.180898   41443 command_runner.go:130] > Change: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.180907   41443 command_runner.go:130] >  Birth: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.181027   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:20:56.186424   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.186496   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:20:56.191814   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.191848   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:20:56.196907   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.197034   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:20:56.202125   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.202319   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:20:56.207764   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.208023   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 12:20:56.213375   41443 command_runner.go:130] > Certificate will not expire
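
The repeated openssl x509 -checkend 86400 calls confirm that each certificate is still valid for at least the next 24 hours. The same check can be expressed with Go's crypto/x509; the sketch below is an illustration rather than minikube's implementation, using one of the certificate paths checked above.

	// checkend.go - Go equivalent of `openssl x509 -noout -checkend 86400` (sketch).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		raw, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatalf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatalf("parse certificate: %v", err)
		}
		if time.Until(cert.NotAfter) > 24*time.Hour {
			fmt.Println("Certificate will not expire") // same message openssl prints on success
		} else {
			fmt.Printf("Certificate expires at %s (within 24h)\n", cert.NotAfter)
		}
	}
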
	I1202 12:20:56.213439   41443 kubeadm.go:392] StartCluster: {Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:20:56.213551   41443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:20:56.213596   41443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:20:56.249377   41443 command_runner.go:130] > 033e9fdb82e3a453176eee4642267e4a17886d07c3831daab5ce66ef3578add8
	I1202 12:20:56.249395   41443 command_runner.go:130] > dc719184399ec6d98e14b0357d9c6ccb13904046023aae554d114e597606fdcd
	I1202 12:20:56.249401   41443 command_runner.go:130] > 79092084dc96f8588b9f2585e6e87ac26d73e584d9c2f6ecb36d4684787cb922
	I1202 12:20:56.249408   41443 command_runner.go:130] > 3720b0a3bc4d341bd1ad62ba26fc92aaf82f3292ab7a071010b806583f4fefe2
	I1202 12:20:56.249413   41443 command_runner.go:130] > 17338e5fa590eed42ecd771141d27b0642808ec6b373ca6b79282469cb80efab
	I1202 12:20:56.249418   41443 command_runner.go:130] > 578fa09f2e10474a35a428f12ab7c18b2f10f1622c251557f459dbe3b8c45e32
	I1202 12:20:56.249423   41443 command_runner.go:130] > 947ad842d5a96b79b1af99d79c6a81a4e264e14d2f878244a946aeec7e6716c0
	I1202 12:20:56.249430   41443 command_runner.go:130] > 3ceedb678ad61438963893a96dce32fb183748948869e8f30ce1161ff6d76fcc
	I1202 12:20:56.249447   41443 cri.go:89] found id: "033e9fdb82e3a453176eee4642267e4a17886d07c3831daab5ce66ef3578add8"
	I1202 12:20:56.249454   41443 cri.go:89] found id: "dc719184399ec6d98e14b0357d9c6ccb13904046023aae554d114e597606fdcd"
	I1202 12:20:56.249458   41443 cri.go:89] found id: "79092084dc96f8588b9f2585e6e87ac26d73e584d9c2f6ecb36d4684787cb922"
	I1202 12:20:56.249461   41443 cri.go:89] found id: "3720b0a3bc4d341bd1ad62ba26fc92aaf82f3292ab7a071010b806583f4fefe2"
	I1202 12:20:56.249463   41443 cri.go:89] found id: "17338e5fa590eed42ecd771141d27b0642808ec6b373ca6b79282469cb80efab"
	I1202 12:20:56.249467   41443 cri.go:89] found id: "578fa09f2e10474a35a428f12ab7c18b2f10f1622c251557f459dbe3b8c45e32"
	I1202 12:20:56.249470   41443 cri.go:89] found id: "947ad842d5a96b79b1af99d79c6a81a4e264e14d2f878244a946aeec7e6716c0"
	I1202 12:20:56.249472   41443 cri.go:89] found id: "3ceedb678ad61438963893a96dce32fb183748948869e8f30ce1161ff6d76fcc"
	I1202 12:20:56.249475   41443 cri.go:89] found id: ""
	I1202 12:20:56.249502   41443 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-191330 -n multinode-191330
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-191330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.69s)
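Note on the post-mortem above: it ends with minikube enumerating the kube-system container IDs over SSH via "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" before the captured log is cut off at the "runc list -f json" call. As a rough standalone illustration only (not minikube's or the test suite's code, and assuming crictl is installed and sudo can reach the CRI socket on the node), the same query can be reproduced with a short Go program:

package main

// Illustration only: list kube-system container IDs the same way the
// post-mortem log above does, by invoking crictl with a pod-namespace
// label filter. Assumes crictl is installed and sudo can reach the CRI
// socket; this is not part of minikube or helpers_test.go.
import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}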

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 stop
E1202 12:22:49.240423   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:23:04.444801   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-191330 stop: exit status 82 (2m0.445235255s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-191330-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-191330 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status
E1202 12:25:01.370000   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 status: (18.710003122s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr: (3.359704952s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr": 
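
The two assertions above follow from the stop command exiting with status 82 (GUEST_STOP_TIMEOUT) while the subsequent "minikube status" runs still report running hosts and kubelets. A minimal standalone sketch of capturing that exit code, shown here only as an illustration and not as the actual helper in multinode_test.go (it assumes the binary path and profile name printed in the log), is:

package main

// Illustration only: run the same stop command the test invokes and
// report its exit status. The binary path and profile name are taken
// from the log above and are assumed to exist relative to the repo root.
import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-191330", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A GUEST_STOP_TIMEOUT surfaces as a non-zero exit code (82 in this run).
		fmt.Println("minikube stop exited with code", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube stop:", err)
	}
}
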
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-191330 -n multinode-191330
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 logs -n 25: (1.935377811s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330:/home/docker/cp-test_multinode-191330-m02_multinode-191330.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330 sudo cat                                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m02_multinode-191330.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03:/home/docker/cp-test_multinode-191330-m02_multinode-191330-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330-m03 sudo cat                                   | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m02_multinode-191330-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp testdata/cp-test.txt                                                | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2126939927/001/cp-test_multinode-191330-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330:/home/docker/cp-test_multinode-191330-m03_multinode-191330.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330 sudo cat                                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m03_multinode-191330.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02:/home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330-m02 sudo cat                                   | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-191330 node stop m03                                                          | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	| node    | multinode-191330 node start                                                             | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:17 UTC |                     |
	| stop    | -p multinode-191330                                                                     | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:17 UTC |                     |
	| start   | -p multinode-191330                                                                     | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:19 UTC | 02 Dec 24 12:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC |                     |
	| node    | multinode-191330 node delete                                                            | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC | 02 Dec 24 12:22 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-191330 stop                                                                   | multinode-191330 | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:19:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:19:22.211468   41443 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:19:22.211639   41443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:19:22.211667   41443 out.go:358] Setting ErrFile to fd 2...
	I1202 12:19:22.211690   41443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:19:22.212250   41443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:19:22.212773   41443 out.go:352] Setting JSON to false
	I1202 12:19:22.213620   41443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3714,"bootTime":1733138248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:19:22.213699   41443 start.go:139] virtualization: kvm guest
	I1202 12:19:22.219628   41443 out.go:177] * [multinode-191330] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:19:22.224413   41443 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:19:22.224410   41443 notify.go:220] Checking for updates...
	I1202 12:19:22.229662   41443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:19:22.231148   41443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:19:22.232396   41443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:19:22.233498   41443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:19:22.234622   41443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:19:22.236088   41443 config.go:182] Loaded profile config "multinode-191330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:19:22.236176   41443 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:19:22.236616   41443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:19:22.236653   41443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:19:22.251258   41443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I1202 12:19:22.251676   41443 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:19:22.252181   41443 main.go:141] libmachine: Using API Version  1
	I1202 12:19:22.252203   41443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:19:22.252527   41443 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:19:22.252691   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:19:22.286024   41443 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:19:22.287195   41443 start.go:297] selected driver: kvm2
	I1202 12:19:22.287207   41443 start.go:901] validating driver "kvm2" against &{Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:19:22.287336   41443 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:19:22.287631   41443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:19:22.287691   41443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:19:22.301556   41443 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:19:22.302185   41443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:19:22.302210   41443 cni.go:84] Creating CNI manager for ""
	I1202 12:19:22.302259   41443 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 12:19:22.302302   41443 start.go:340] cluster config:
	{Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:19:22.302455   41443 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:19:22.303959   41443 out.go:177] * Starting "multinode-191330" primary control-plane node in "multinode-191330" cluster
	I1202 12:19:22.305053   41443 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:19:22.305084   41443 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:19:22.305093   41443 cache.go:56] Caching tarball of preloaded images
	I1202 12:19:22.305158   41443 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:19:22.305168   41443 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:19:22.305266   41443 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/config.json ...
	I1202 12:19:22.305432   41443 start.go:360] acquireMachinesLock for multinode-191330: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:19:22.305467   41443 start.go:364] duration metric: took 19.496µs to acquireMachinesLock for "multinode-191330"
	I1202 12:19:22.305483   41443 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:19:22.305491   41443 fix.go:54] fixHost starting: 
	I1202 12:19:22.305727   41443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:19:22.305752   41443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:19:22.318672   41443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I1202 12:19:22.319076   41443 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:19:22.319501   41443 main.go:141] libmachine: Using API Version  1
	I1202 12:19:22.319522   41443 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:19:22.319837   41443 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:19:22.319983   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:19:22.320099   41443 main.go:141] libmachine: (multinode-191330) Calling .GetState
	I1202 12:19:22.321348   41443 fix.go:112] recreateIfNeeded on multinode-191330: state=Running err=<nil>
	W1202 12:19:22.321363   41443 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:19:22.323192   41443 out.go:177] * Updating the running kvm2 "multinode-191330" VM ...
	I1202 12:19:22.324657   41443 machine.go:93] provisionDockerMachine start ...
	I1202 12:19:22.324673   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:19:22.324845   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.327116   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.327522   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.327549   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.327665   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.327864   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.328012   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.328142   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.328312   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.328490   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.328502   41443 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:19:22.432919   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-191330
	
	I1202 12:19:22.432951   41443 main.go:141] libmachine: (multinode-191330) Calling .GetMachineName
	I1202 12:19:22.433183   41443 buildroot.go:166] provisioning hostname "multinode-191330"
	I1202 12:19:22.433207   41443 main.go:141] libmachine: (multinode-191330) Calling .GetMachineName
	I1202 12:19:22.433406   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.435906   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.436287   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.436308   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.436458   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.436653   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.436829   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.436950   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.437116   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.437265   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.437277   41443 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-191330 && echo "multinode-191330" | sudo tee /etc/hostname
	I1202 12:19:22.557004   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-191330
	
	I1202 12:19:22.557038   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.559570   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.559935   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.559973   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.560104   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.560322   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.560462   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.560569   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.560703   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.560860   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.560882   41443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-191330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-191330/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-191330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:19:22.664877   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:19:22.664902   41443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:19:22.664918   41443 buildroot.go:174] setting up certificates
	I1202 12:19:22.664927   41443 provision.go:84] configureAuth start
	I1202 12:19:22.664938   41443 main.go:141] libmachine: (multinode-191330) Calling .GetMachineName
	I1202 12:19:22.665221   41443 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:19:22.667507   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.667852   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.667873   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.668037   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.669990   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.670317   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.670346   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.670482   41443 provision.go:143] copyHostCerts
	I1202 12:19:22.670509   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:19:22.670550   41443 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:19:22.670562   41443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:19:22.670624   41443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:19:22.670700   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:19:22.670718   41443 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:19:22.670722   41443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:19:22.670746   41443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:19:22.670786   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:19:22.670803   41443 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:19:22.670809   41443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:19:22.670828   41443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:19:22.670870   41443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.multinode-191330 san=[127.0.0.1 192.168.39.135 localhost minikube multinode-191330]
	I1202 12:19:22.808036   41443 provision.go:177] copyRemoteCerts
	I1202 12:19:22.808103   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:19:22.808130   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.810781   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.811162   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.811190   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.811329   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.811510   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.811680   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.811806   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:19:22.894671   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 12:19:22.894725   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:19:22.921311   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 12:19:22.921368   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1202 12:19:22.947196   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 12:19:22.947247   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:19:22.970654   41443 provision.go:87] duration metric: took 305.7188ms to configureAuth
	I1202 12:19:22.970673   41443 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:19:22.970936   41443 config.go:182] Loaded profile config "multinode-191330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:19:22.971022   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:19:22.973563   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.973875   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:19:22.973899   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:19:22.974154   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:19:22.974323   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.974492   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:19:22.974608   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:19:22.974745   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:19:22.974889   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:19:22.974903   41443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:20:53.628037   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:20:53.628066   41443 machine.go:96] duration metric: took 1m31.303395112s to provisionDockerMachine
	I1202 12:20:53.628083   41443 start.go:293] postStartSetup for "multinode-191330" (driver="kvm2")
	I1202 12:20:53.628098   41443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:20:53.628120   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.628449   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:20:53.628486   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.631447   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.631888   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.631930   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.632086   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.632275   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.632445   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.632617   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:20:53.715084   41443 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:20:53.719272   41443 command_runner.go:130] > NAME=Buildroot
	I1202 12:20:53.719287   41443 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1202 12:20:53.719290   41443 command_runner.go:130] > ID=buildroot
	I1202 12:20:53.719297   41443 command_runner.go:130] > VERSION_ID=2023.02.9
	I1202 12:20:53.719318   41443 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1202 12:20:53.719648   41443 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:20:53.719664   41443 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:20:53.719714   41443 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:20:53.719790   41443 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:20:53.719801   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /etc/ssl/certs/134162.pem
	I1202 12:20:53.719877   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:20:53.728594   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:20:53.751723   41443 start.go:296] duration metric: took 123.628453ms for postStartSetup
	I1202 12:20:53.751757   41443 fix.go:56] duration metric: took 1m31.446265018s for fixHost
	I1202 12:20:53.751778   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.754511   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.754860   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.754886   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.755034   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.755201   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.755358   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.755450   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.755599   41443 main.go:141] libmachine: Using SSH client type: native
	I1202 12:20:53.755864   41443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I1202 12:20:53.755884   41443 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:20:53.856482   41443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733142053.832598558
	
	I1202 12:20:53.856505   41443 fix.go:216] guest clock: 1733142053.832598558
	I1202 12:20:53.856513   41443 fix.go:229] Guest: 2024-12-02 12:20:53.832598558 +0000 UTC Remote: 2024-12-02 12:20:53.75176176 +0000 UTC m=+91.575535379 (delta=80.836798ms)
	I1202 12:20:53.856537   41443 fix.go:200] guest clock delta is within tolerance: 80.836798ms
	I1202 12:20:53.856546   41443 start.go:83] releasing machines lock for "multinode-191330", held for 1m31.551069663s
	I1202 12:20:53.856569   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.856777   41443 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:20:53.859201   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.859487   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.859513   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.859668   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.860173   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.860351   41443 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:20:53.860459   41443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:20:53.860497   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.860558   41443 ssh_runner.go:195] Run: cat /version.json
	I1202 12:20:53.860583   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:20:53.863150   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863168   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863560   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.863583   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863608   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:53.863625   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:53.863744   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.863884   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:20:53.863906   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.864025   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.864048   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:20:53.864156   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:20:53.864293   41443 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:20:53.864452   41443 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:20:53.960506   41443 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 12:20:53.960555   41443 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1202 12:20:53.960689   41443 ssh_runner.go:195] Run: systemctl --version
	I1202 12:20:53.966431   41443 command_runner.go:130] > systemd 252 (252)
	I1202 12:20:53.966461   41443 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1202 12:20:53.966510   41443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:20:54.127850   41443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 12:20:54.133681   41443 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 12:20:54.133884   41443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:20:54.133938   41443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:20:54.143488   41443 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 12:20:54.143507   41443 start.go:495] detecting cgroup driver to use...
	I1202 12:20:54.143561   41443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:20:54.162989   41443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:20:54.176501   41443 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:20:54.176558   41443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:20:54.189549   41443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:20:54.203799   41443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:20:54.361987   41443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:20:54.509709   41443 docker.go:233] disabling docker service ...
	I1202 12:20:54.509800   41443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:20:54.526356   41443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:20:54.539835   41443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:20:54.680464   41443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:20:54.820038   41443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:20:54.834659   41443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:20:54.852642   41443 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1202 12:20:54.853254   41443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:20:54.853310   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.863564   41443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:20:54.863616   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.874918   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.886024   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.896792   41443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:20:54.907484   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.918243   41443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.928880   41443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:20:54.939068   41443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:20:54.948196   41443 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 12:20:54.948257   41443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:20:54.957308   41443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:20:55.093641   41443 ssh_runner.go:195] Run: sudo systemctl restart crio
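
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed (pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to "cgroupfs", re-add conmon_cgroup = "pod"), then reloads systemd and restarts CRI-O. The Go sketch below reproduces those edits on an in-memory sample of such a drop-in file; the sample contents and the program are illustrative only and not part of minikube.

// crioconf.go - illustrative reproduction of the sed edits shown in the log.
package main

import (
	"fmt"
	"regexp"
)

// Assumed starting contents of the drop-in file, for demonstration.
const sample = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

func main() {
	// Pin the pause image, matching the first sed in the log.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(sample, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Force the cgroupfs cgroup manager.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it as "pod" after cgroup_manager.
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(out, "")
	out = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(out, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(out)
}
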
	I1202 12:20:55.287478   41443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:20:55.287552   41443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:20:55.292589   41443 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1202 12:20:55.292609   41443 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 12:20:55.292615   41443 command_runner.go:130] > Device: 0,22	Inode: 1278        Links: 1
	I1202 12:20:55.292622   41443 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 12:20:55.292627   41443 command_runner.go:130] > Access: 2024-12-02 12:20:55.157624467 +0000
	I1202 12:20:55.292632   41443 command_runner.go:130] > Modify: 2024-12-02 12:20:55.157624467 +0000
	I1202 12:20:55.292637   41443 command_runner.go:130] > Change: 2024-12-02 12:20:55.157624467 +0000
	I1202 12:20:55.292640   41443 command_runner.go:130] >  Birth: -
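
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then stats it. A hedged sketch of such a poll-until-the-socket-exists loop follows, assuming a fixed 500ms poll interval (the real interval is not shown in the log):

// waitsock.go - illustrative sketch of the "Will wait 60s for socket path" step.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
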
	I1202 12:20:55.292827   41443 start.go:563] Will wait 60s for crictl version
	I1202 12:20:55.292890   41443 ssh_runner.go:195] Run: which crictl
	I1202 12:20:55.296865   41443 command_runner.go:130] > /usr/bin/crictl
	I1202 12:20:55.296907   41443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:20:55.334618   41443 command_runner.go:130] > Version:  0.1.0
	I1202 12:20:55.334634   41443 command_runner.go:130] > RuntimeName:  cri-o
	I1202 12:20:55.334639   41443 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1202 12:20:55.334644   41443 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 12:20:55.335667   41443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
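
The `crictl version` call above returns plain key/value text (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion), which the log then echoes back. Below is a small illustrative parser for that format, using the values from the log as sample input; parseCrictlVersion is a hypothetical helper, not a minikube function.

// crictlversion.go - sketch of parsing `crictl version` key/value output.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	f := parseCrictlVersion(out)
	fmt.Printf("runtime %s %s (API %s)\n", f["RuntimeName"], f["RuntimeVersion"], f["RuntimeApiVersion"])
}
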
	I1202 12:20:55.335735   41443 ssh_runner.go:195] Run: crio --version
	I1202 12:20:55.363933   41443 command_runner.go:130] > crio version 1.29.1
	I1202 12:20:55.363953   41443 command_runner.go:130] > Version:        1.29.1
	I1202 12:20:55.363961   41443 command_runner.go:130] > GitCommit:      unknown
	I1202 12:20:55.363976   41443 command_runner.go:130] > GitCommitDate:  unknown
	I1202 12:20:55.363983   41443 command_runner.go:130] > GitTreeState:   clean
	I1202 12:20:55.363992   41443 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1202 12:20:55.363996   41443 command_runner.go:130] > GoVersion:      go1.21.6
	I1202 12:20:55.364000   41443 command_runner.go:130] > Compiler:       gc
	I1202 12:20:55.364008   41443 command_runner.go:130] > Platform:       linux/amd64
	I1202 12:20:55.364011   41443 command_runner.go:130] > Linkmode:       dynamic
	I1202 12:20:55.364020   41443 command_runner.go:130] > BuildTags:      
	I1202 12:20:55.364028   41443 command_runner.go:130] >   containers_image_ostree_stub
	I1202 12:20:55.364032   41443 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1202 12:20:55.364036   41443 command_runner.go:130] >   btrfs_noversion
	I1202 12:20:55.364040   41443 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1202 12:20:55.364049   41443 command_runner.go:130] >   libdm_no_deferred_remove
	I1202 12:20:55.364052   41443 command_runner.go:130] >   seccomp
	I1202 12:20:55.364056   41443 command_runner.go:130] > LDFlags:          unknown
	I1202 12:20:55.364061   41443 command_runner.go:130] > SeccompEnabled:   true
	I1202 12:20:55.364065   41443 command_runner.go:130] > AppArmorEnabled:  false
	I1202 12:20:55.364126   41443 ssh_runner.go:195] Run: crio --version
	I1202 12:20:55.393458   41443 command_runner.go:130] > crio version 1.29.1
	I1202 12:20:55.393476   41443 command_runner.go:130] > Version:        1.29.1
	I1202 12:20:55.393484   41443 command_runner.go:130] > GitCommit:      unknown
	I1202 12:20:55.393490   41443 command_runner.go:130] > GitCommitDate:  unknown
	I1202 12:20:55.393496   41443 command_runner.go:130] > GitTreeState:   clean
	I1202 12:20:55.393504   41443 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1202 12:20:55.393510   41443 command_runner.go:130] > GoVersion:      go1.21.6
	I1202 12:20:55.393518   41443 command_runner.go:130] > Compiler:       gc
	I1202 12:20:55.393529   41443 command_runner.go:130] > Platform:       linux/amd64
	I1202 12:20:55.393540   41443 command_runner.go:130] > Linkmode:       dynamic
	I1202 12:20:55.393550   41443 command_runner.go:130] > BuildTags:      
	I1202 12:20:55.393559   41443 command_runner.go:130] >   containers_image_ostree_stub
	I1202 12:20:55.393567   41443 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1202 12:20:55.393574   41443 command_runner.go:130] >   btrfs_noversion
	I1202 12:20:55.393583   41443 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1202 12:20:55.393594   41443 command_runner.go:130] >   libdm_no_deferred_remove
	I1202 12:20:55.393600   41443 command_runner.go:130] >   seccomp
	I1202 12:20:55.393609   41443 command_runner.go:130] > LDFlags:          unknown
	I1202 12:20:55.393619   41443 command_runner.go:130] > SeccompEnabled:   true
	I1202 12:20:55.393627   41443 command_runner.go:130] > AppArmorEnabled:  false
	I1202 12:20:55.396140   41443 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:20:55.397217   41443 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:20:55.400029   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:55.400443   41443 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:20:55.400466   41443 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:20:55.400636   41443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:20:55.404813   41443 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1202 12:20:55.404972   41443 kubeadm.go:883] updating cluster {Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:20:55.405098   41443 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:20:55.405153   41443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:20:55.448656   41443 command_runner.go:130] > {
	I1202 12:20:55.448679   41443 command_runner.go:130] >   "images": [
	I1202 12:20:55.448686   41443 command_runner.go:130] >     {
	I1202 12:20:55.448696   41443 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1202 12:20:55.448701   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.448707   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1202 12:20:55.448711   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448716   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.448724   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1202 12:20:55.448733   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1202 12:20:55.448737   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448742   41443 command_runner.go:130] >       "size": "94965812",
	I1202 12:20:55.448750   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.448757   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.448768   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.448777   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.448785   41443 command_runner.go:130] >     },
	I1202 12:20:55.448789   41443 command_runner.go:130] >     {
	I1202 12:20:55.448798   41443 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1202 12:20:55.448802   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.448808   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1202 12:20:55.448812   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448816   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.448822   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1202 12:20:55.448836   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1202 12:20:55.448843   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448849   41443 command_runner.go:130] >       "size": "94958644",
	I1202 12:20:55.448856   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.448872   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.448881   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.448888   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.448896   41443 command_runner.go:130] >     },
	I1202 12:20:55.448905   41443 command_runner.go:130] >     {
	I1202 12:20:55.448914   41443 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1202 12:20:55.448923   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.448936   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1202 12:20:55.448944   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448951   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.448966   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1202 12:20:55.448981   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1202 12:20:55.448989   41443 command_runner.go:130] >       ],
	I1202 12:20:55.448994   41443 command_runner.go:130] >       "size": "1363676",
	I1202 12:20:55.449001   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449008   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449018   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449027   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449035   41443 command_runner.go:130] >     },
	I1202 12:20:55.449040   41443 command_runner.go:130] >     {
	I1202 12:20:55.449052   41443 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1202 12:20:55.449062   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449073   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 12:20:55.449080   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449085   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449101   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1202 12:20:55.449128   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1202 12:20:55.449138   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449145   41443 command_runner.go:130] >       "size": "31470524",
	I1202 12:20:55.449155   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449164   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449173   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449180   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449189   41443 command_runner.go:130] >     },
	I1202 12:20:55.449198   41443 command_runner.go:130] >     {
	I1202 12:20:55.449211   41443 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1202 12:20:55.449220   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449237   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1202 12:20:55.449245   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449250   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449263   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1202 12:20:55.449275   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1202 12:20:55.449284   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449290   41443 command_runner.go:130] >       "size": "63273227",
	I1202 12:20:55.449300   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449307   41443 command_runner.go:130] >       "username": "nonroot",
	I1202 12:20:55.449317   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449326   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449331   41443 command_runner.go:130] >     },
	I1202 12:20:55.449338   41443 command_runner.go:130] >     {
	I1202 12:20:55.449351   41443 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1202 12:20:55.449368   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449379   41443 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1202 12:20:55.449388   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449397   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449411   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1202 12:20:55.449423   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1202 12:20:55.449429   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449436   41443 command_runner.go:130] >       "size": "149009664",
	I1202 12:20:55.449447   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.449457   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.449465   41443 command_runner.go:130] >       },
	I1202 12:20:55.449472   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449481   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449491   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449497   41443 command_runner.go:130] >     },
	I1202 12:20:55.449505   41443 command_runner.go:130] >     {
	I1202 12:20:55.449512   41443 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1202 12:20:55.449520   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449528   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1202 12:20:55.449543   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449553   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449568   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1202 12:20:55.449581   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1202 12:20:55.449589   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449594   41443 command_runner.go:130] >       "size": "95274464",
	I1202 12:20:55.449599   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.449604   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.449614   41443 command_runner.go:130] >       },
	I1202 12:20:55.449624   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449634   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449643   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449651   41443 command_runner.go:130] >     },
	I1202 12:20:55.449656   41443 command_runner.go:130] >     {
	I1202 12:20:55.449670   41443 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1202 12:20:55.449678   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449684   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1202 12:20:55.449698   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449710   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449739   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1202 12:20:55.449753   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1202 12:20:55.449762   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449766   41443 command_runner.go:130] >       "size": "89474374",
	I1202 12:20:55.449769   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.449778   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.449784   41443 command_runner.go:130] >       },
	I1202 12:20:55.449794   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449803   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449809   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449815   41443 command_runner.go:130] >     },
	I1202 12:20:55.449820   41443 command_runner.go:130] >     {
	I1202 12:20:55.449830   41443 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1202 12:20:55.449836   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449851   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1202 12:20:55.449855   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449858   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449869   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1202 12:20:55.449881   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1202 12:20:55.449887   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449894   41443 command_runner.go:130] >       "size": "92783513",
	I1202 12:20:55.449906   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.449913   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.449919   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.449925   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.449929   41443 command_runner.go:130] >     },
	I1202 12:20:55.449933   41443 command_runner.go:130] >     {
	I1202 12:20:55.449943   41443 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1202 12:20:55.449953   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.449961   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1202 12:20:55.449969   41443 command_runner.go:130] >       ],
	I1202 12:20:55.449975   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.449986   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1202 12:20:55.449997   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1202 12:20:55.450006   41443 command_runner.go:130] >       ],
	I1202 12:20:55.450012   41443 command_runner.go:130] >       "size": "68457798",
	I1202 12:20:55.450017   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.450024   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.450029   41443 command_runner.go:130] >       },
	I1202 12:20:55.450035   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.450044   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.450051   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.450057   41443 command_runner.go:130] >     },
	I1202 12:20:55.450063   41443 command_runner.go:130] >     {
	I1202 12:20:55.450072   41443 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1202 12:20:55.450084   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.450092   41443 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1202 12:20:55.450105   41443 command_runner.go:130] >       ],
	I1202 12:20:55.450111   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.450124   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1202 12:20:55.450139   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1202 12:20:55.450146   41443 command_runner.go:130] >       ],
	I1202 12:20:55.450151   41443 command_runner.go:130] >       "size": "742080",
	I1202 12:20:55.450156   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.450165   41443 command_runner.go:130] >         "value": "65535"
	I1202 12:20:55.450175   41443 command_runner.go:130] >       },
	I1202 12:20:55.450184   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.450191   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.450201   41443 command_runner.go:130] >       "pinned": true
	I1202 12:20:55.450206   41443 command_runner.go:130] >     }
	I1202 12:20:55.450210   41443 command_runner.go:130] >   ]
	I1202 12:20:55.450219   41443 command_runner.go:130] > }
	I1202 12:20:55.450450   41443 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:20:55.450463   41443 crio.go:433] Images already preloaded, skipping extraction
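
crio.go:514 concludes that "all images are preloaded" from the `sudo crictl images --output json` output above. The sketch below shows one way such a check could be written: decode the JSON image list and verify that each required repoTag is present. The trimmed JSON sample and the required-image list are hand-picked from the log for illustration only; this is not minikube's actual check.

// preloadcheck.go - illustrative preloaded-image check over crictl's JSON output.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Trimmed-down sample in the same shape as the log output above.
	raw := `{"images":[{"id":"9499c996...","repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"]},
	          {"id":"873ed751...","repoTags":["registry.k8s.io/pause:3.10"]}]}`

	var list imageList
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	required := []string{"registry.k8s.io/kube-apiserver:v1.31.2", "registry.k8s.io/pause:3.10"}
	for _, r := range required {
		fmt.Printf("%-45s preloaded=%v\n", r, have[r])
	}
}
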
	I1202 12:20:55.450504   41443 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:20:55.486596   41443 command_runner.go:130] > {
	I1202 12:20:55.486608   41443 command_runner.go:130] >   "images": [
	I1202 12:20:55.486612   41443 command_runner.go:130] >     {
	I1202 12:20:55.486620   41443 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1202 12:20:55.486624   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486635   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1202 12:20:55.486638   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486643   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486654   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1202 12:20:55.486662   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1202 12:20:55.486665   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486670   41443 command_runner.go:130] >       "size": "94965812",
	I1202 12:20:55.486675   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.486678   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.486685   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.486690   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.486692   41443 command_runner.go:130] >     },
	I1202 12:20:55.486696   41443 command_runner.go:130] >     {
	I1202 12:20:55.486702   41443 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1202 12:20:55.486709   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486714   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1202 12:20:55.486717   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486722   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486732   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1202 12:20:55.486744   41443 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1202 12:20:55.486753   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486760   41443 command_runner.go:130] >       "size": "94958644",
	I1202 12:20:55.486766   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.486777   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.486786   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.486793   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.486807   41443 command_runner.go:130] >     },
	I1202 12:20:55.486815   41443 command_runner.go:130] >     {
	I1202 12:20:55.486821   41443 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1202 12:20:55.486824   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486833   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1202 12:20:55.486842   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486849   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486864   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1202 12:20:55.486876   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1202 12:20:55.486880   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486884   41443 command_runner.go:130] >       "size": "1363676",
	I1202 12:20:55.486889   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.486893   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.486901   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.486908   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.486911   41443 command_runner.go:130] >     },
	I1202 12:20:55.486916   41443 command_runner.go:130] >     {
	I1202 12:20:55.486928   41443 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1202 12:20:55.486939   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.486953   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 12:20:55.486962   41443 command_runner.go:130] >       ],
	I1202 12:20:55.486971   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.486984   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1202 12:20:55.486999   41443 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1202 12:20:55.487008   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487019   41443 command_runner.go:130] >       "size": "31470524",
	I1202 12:20:55.487029   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.487038   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487045   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487051   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487059   41443 command_runner.go:130] >     },
	I1202 12:20:55.487065   41443 command_runner.go:130] >     {
	I1202 12:20:55.487077   41443 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1202 12:20:55.487089   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487102   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1202 12:20:55.487112   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487118   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487133   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1202 12:20:55.487148   41443 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1202 12:20:55.487157   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487166   41443 command_runner.go:130] >       "size": "63273227",
	I1202 12:20:55.487175   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.487182   41443 command_runner.go:130] >       "username": "nonroot",
	I1202 12:20:55.487193   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487202   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487211   41443 command_runner.go:130] >     },
	I1202 12:20:55.487219   41443 command_runner.go:130] >     {
	I1202 12:20:55.487231   41443 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1202 12:20:55.487240   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487250   41443 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1202 12:20:55.487256   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487262   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487277   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1202 12:20:55.487292   41443 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1202 12:20:55.487301   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487311   41443 command_runner.go:130] >       "size": "149009664",
	I1202 12:20:55.487320   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487330   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487339   41443 command_runner.go:130] >       },
	I1202 12:20:55.487347   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487353   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487373   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487379   41443 command_runner.go:130] >     },
	I1202 12:20:55.487384   41443 command_runner.go:130] >     {
	I1202 12:20:55.487396   41443 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1202 12:20:55.487405   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487425   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1202 12:20:55.487434   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487440   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487456   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1202 12:20:55.487471   41443 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1202 12:20:55.487479   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487489   41443 command_runner.go:130] >       "size": "95274464",
	I1202 12:20:55.487498   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487507   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487513   41443 command_runner.go:130] >       },
	I1202 12:20:55.487518   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487527   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487538   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487547   41443 command_runner.go:130] >     },
	I1202 12:20:55.487555   41443 command_runner.go:130] >     {
	I1202 12:20:55.487568   41443 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1202 12:20:55.487577   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487585   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1202 12:20:55.487593   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487597   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487624   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1202 12:20:55.487640   41443 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1202 12:20:55.487646   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487653   41443 command_runner.go:130] >       "size": "89474374",
	I1202 12:20:55.487662   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487670   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487679   41443 command_runner.go:130] >       },
	I1202 12:20:55.487683   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487690   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487697   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487706   41443 command_runner.go:130] >     },
	I1202 12:20:55.487716   41443 command_runner.go:130] >     {
	I1202 12:20:55.487729   41443 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1202 12:20:55.487744   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487755   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1202 12:20:55.487763   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487767   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487779   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1202 12:20:55.487798   41443 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1202 12:20:55.487807   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487817   41443 command_runner.go:130] >       "size": "92783513",
	I1202 12:20:55.487826   41443 command_runner.go:130] >       "uid": null,
	I1202 12:20:55.487835   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.487845   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.487853   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.487859   41443 command_runner.go:130] >     },
	I1202 12:20:55.487863   41443 command_runner.go:130] >     {
	I1202 12:20:55.487877   41443 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1202 12:20:55.487887   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.487898   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1202 12:20:55.487907   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487916   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.487931   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1202 12:20:55.487942   41443 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1202 12:20:55.487949   41443 command_runner.go:130] >       ],
	I1202 12:20:55.487956   41443 command_runner.go:130] >       "size": "68457798",
	I1202 12:20:55.487966   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.487975   41443 command_runner.go:130] >         "value": "0"
	I1202 12:20:55.487984   41443 command_runner.go:130] >       },
	I1202 12:20:55.487993   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.488001   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.488008   41443 command_runner.go:130] >       "pinned": false
	I1202 12:20:55.488017   41443 command_runner.go:130] >     },
	I1202 12:20:55.488024   41443 command_runner.go:130] >     {
	I1202 12:20:55.488030   41443 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1202 12:20:55.488039   41443 command_runner.go:130] >       "repoTags": [
	I1202 12:20:55.488056   41443 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1202 12:20:55.488065   41443 command_runner.go:130] >       ],
	I1202 12:20:55.488075   41443 command_runner.go:130] >       "repoDigests": [
	I1202 12:20:55.488089   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1202 12:20:55.488103   41443 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1202 12:20:55.488111   41443 command_runner.go:130] >       ],
	I1202 12:20:55.488115   41443 command_runner.go:130] >       "size": "742080",
	I1202 12:20:55.488119   41443 command_runner.go:130] >       "uid": {
	I1202 12:20:55.488129   41443 command_runner.go:130] >         "value": "65535"
	I1202 12:20:55.488138   41443 command_runner.go:130] >       },
	I1202 12:20:55.488148   41443 command_runner.go:130] >       "username": "",
	I1202 12:20:55.488157   41443 command_runner.go:130] >       "spec": null,
	I1202 12:20:55.488166   41443 command_runner.go:130] >       "pinned": true
	I1202 12:20:55.488174   41443 command_runner.go:130] >     }
	I1202 12:20:55.488183   41443 command_runner.go:130] >   ]
	I1202 12:20:55.488188   41443 command_runner.go:130] > }
	I1202 12:20:55.488379   41443 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:20:55.488394   41443 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:20:55.488403   41443 kubeadm.go:934] updating node { 192.168.39.135 8443 v1.31.2 crio true true} ...
	I1202 12:20:55.488521   41443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-191330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
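
kubeadm.go:946 prints the kubelet systemd drop-in it will install, parameterised by the node name, node IP, and Kubernetes version from the cluster config above. A hedged sketch of rendering that unit with text/template follows, using the values from this log; the struct fields and rendering helper are assumptions for illustration, not minikube's code.

// kubeletunit.go - illustrative rendering of the kubelet drop-in shown above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.2", "multinode-191330", "192.168.39.135"} // values from the log above
	if err := template.Must(template.New("kubelet").Parse(unit)).Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
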
	I1202 12:20:55.488609   41443 ssh_runner.go:195] Run: crio config
	I1202 12:20:55.521071   41443 command_runner.go:130] ! time="2024-12-02 12:20:55.497300210Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1202 12:20:55.532339   41443 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1202 12:20:55.539146   41443 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1202 12:20:55.539169   41443 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1202 12:20:55.539179   41443 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1202 12:20:55.539184   41443 command_runner.go:130] > #
	I1202 12:20:55.539205   41443 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1202 12:20:55.539221   41443 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1202 12:20:55.539231   41443 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1202 12:20:55.539248   41443 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1202 12:20:55.539258   41443 command_runner.go:130] > # reload'.
	I1202 12:20:55.539270   41443 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1202 12:20:55.539280   41443 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1202 12:20:55.539288   41443 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1202 12:20:55.539299   41443 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1202 12:20:55.539306   41443 command_runner.go:130] > [crio]
	I1202 12:20:55.539311   41443 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1202 12:20:55.539318   41443 command_runner.go:130] > # containers images, in this directory.
	I1202 12:20:55.539325   41443 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1202 12:20:55.539336   41443 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1202 12:20:55.539343   41443 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1202 12:20:55.539350   41443 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1202 12:20:55.539356   41443 command_runner.go:130] > # imagestore = ""
	I1202 12:20:55.539364   41443 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1202 12:20:55.539372   41443 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1202 12:20:55.539379   41443 command_runner.go:130] > storage_driver = "overlay"
	I1202 12:20:55.539384   41443 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1202 12:20:55.539394   41443 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1202 12:20:55.539400   41443 command_runner.go:130] > storage_option = [
	I1202 12:20:55.539405   41443 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1202 12:20:55.539410   41443 command_runner.go:130] > ]
	I1202 12:20:55.539417   41443 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1202 12:20:55.539425   41443 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1202 12:20:55.539432   41443 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1202 12:20:55.539437   41443 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1202 12:20:55.539445   41443 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1202 12:20:55.539452   41443 command_runner.go:130] > # always happen on a node reboot
	I1202 12:20:55.539457   41443 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1202 12:20:55.539470   41443 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1202 12:20:55.539478   41443 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1202 12:20:55.539483   41443 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1202 12:20:55.539490   41443 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1202 12:20:55.539497   41443 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1202 12:20:55.539506   41443 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1202 12:20:55.539512   41443 command_runner.go:130] > # internal_wipe = true
	I1202 12:20:55.539519   41443 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1202 12:20:55.539527   41443 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1202 12:20:55.539541   41443 command_runner.go:130] > # internal_repair = false
	I1202 12:20:55.539549   41443 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1202 12:20:55.539557   41443 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1202 12:20:55.539566   41443 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1202 12:20:55.539578   41443 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1202 12:20:55.539590   41443 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1202 12:20:55.539599   41443 command_runner.go:130] > [crio.api]
	I1202 12:20:55.539611   41443 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1202 12:20:55.539622   41443 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1202 12:20:55.539633   41443 command_runner.go:130] > # IP address on which the stream server will listen.
	I1202 12:20:55.539643   41443 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1202 12:20:55.539653   41443 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1202 12:20:55.539660   41443 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1202 12:20:55.539664   41443 command_runner.go:130] > # stream_port = "0"
	I1202 12:20:55.539669   41443 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1202 12:20:55.539676   41443 command_runner.go:130] > # stream_enable_tls = false
	I1202 12:20:55.539682   41443 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1202 12:20:55.539688   41443 command_runner.go:130] > # stream_idle_timeout = ""
	I1202 12:20:55.539697   41443 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1202 12:20:55.539705   41443 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1202 12:20:55.539710   41443 command_runner.go:130] > # minutes.
	I1202 12:20:55.539714   41443 command_runner.go:130] > # stream_tls_cert = ""
	I1202 12:20:55.539722   41443 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1202 12:20:55.539728   41443 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1202 12:20:55.539734   41443 command_runner.go:130] > # stream_tls_key = ""
	I1202 12:20:55.539740   41443 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1202 12:20:55.539748   41443 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1202 12:20:55.539764   41443 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1202 12:20:55.539771   41443 command_runner.go:130] > # stream_tls_ca = ""
	I1202 12:20:55.539778   41443 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 12:20:55.539785   41443 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1202 12:20:55.539791   41443 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1202 12:20:55.539798   41443 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
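For reference, a minimal sketch of how the [crio.api] stream settings documented above could be set to serve the attach/exec stream over TLS; the certificate paths and port below are illustrative assumptions, not values from this run:

	[crio.api]
	listen = "/var/run/crio/crio.sock"
	stream_address = "127.0.0.1"
	stream_port = "10010"                      # illustrative; "0" picks a random free port
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"   # hypothetical certificate path
	stream_tls_key = "/etc/crio/stream.key"    # hypothetical key path
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216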
	I1202 12:20:55.539808   41443 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1202 12:20:55.539816   41443 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1202 12:20:55.539823   41443 command_runner.go:130] > [crio.runtime]
	I1202 12:20:55.539828   41443 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1202 12:20:55.539836   41443 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1202 12:20:55.539840   41443 command_runner.go:130] > # "nofile=1024:2048"
	I1202 12:20:55.539848   41443 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1202 12:20:55.539851   41443 command_runner.go:130] > # default_ulimits = [
	I1202 12:20:55.539857   41443 command_runner.go:130] > # ]
	I1202 12:20:55.539862   41443 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1202 12:20:55.539868   41443 command_runner.go:130] > # no_pivot = false
	I1202 12:20:55.539874   41443 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1202 12:20:55.539886   41443 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1202 12:20:55.539894   41443 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1202 12:20:55.539899   41443 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1202 12:20:55.539906   41443 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1202 12:20:55.539912   41443 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 12:20:55.539919   41443 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1202 12:20:55.539923   41443 command_runner.go:130] > # Cgroup setting for conmon
	I1202 12:20:55.539931   41443 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1202 12:20:55.539937   41443 command_runner.go:130] > conmon_cgroup = "pod"
	I1202 12:20:55.539943   41443 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1202 12:20:55.539950   41443 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1202 12:20:55.539959   41443 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1202 12:20:55.539965   41443 command_runner.go:130] > conmon_env = [
	I1202 12:20:55.539970   41443 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1202 12:20:55.539975   41443 command_runner.go:130] > ]
	I1202 12:20:55.539990   41443 command_runner.go:130] > # Additional environment variables to set for all the
	I1202 12:20:55.539998   41443 command_runner.go:130] > # containers. These are overridden if set in the
	I1202 12:20:55.540003   41443 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1202 12:20:55.540009   41443 command_runner.go:130] > # default_env = [
	I1202 12:20:55.540013   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540021   41443 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1202 12:20:55.540035   41443 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1202 12:20:55.540041   41443 command_runner.go:130] > # selinux = false
	I1202 12:20:55.540047   41443 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1202 12:20:55.540055   41443 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1202 12:20:55.540063   41443 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1202 12:20:55.540067   41443 command_runner.go:130] > # seccomp_profile = ""
	I1202 12:20:55.540074   41443 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1202 12:20:55.540080   41443 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1202 12:20:55.540087   41443 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1202 12:20:55.540091   41443 command_runner.go:130] > # which might increase security.
	I1202 12:20:55.540098   41443 command_runner.go:130] > # This option is currently deprecated,
	I1202 12:20:55.540104   41443 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1202 12:20:55.540115   41443 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1202 12:20:55.540123   41443 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1202 12:20:55.540131   41443 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1202 12:20:55.540137   41443 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1202 12:20:55.540145   41443 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1202 12:20:55.540151   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.540158   41443 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1202 12:20:55.540163   41443 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1202 12:20:55.540170   41443 command_runner.go:130] > # the cgroup blockio controller.
	I1202 12:20:55.540174   41443 command_runner.go:130] > # blockio_config_file = ""
	I1202 12:20:55.540183   41443 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1202 12:20:55.540189   41443 command_runner.go:130] > # blockio parameters.
	I1202 12:20:55.540193   41443 command_runner.go:130] > # blockio_reload = false
	I1202 12:20:55.540199   41443 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1202 12:20:55.540205   41443 command_runner.go:130] > # irqbalance daemon.
	I1202 12:20:55.540210   41443 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1202 12:20:55.540222   41443 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1202 12:20:55.540243   41443 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1202 12:20:55.540258   41443 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1202 12:20:55.540267   41443 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1202 12:20:55.540273   41443 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1202 12:20:55.540285   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.540291   41443 command_runner.go:130] > # rdt_config_file = ""
	I1202 12:20:55.540297   41443 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1202 12:20:55.540303   41443 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1202 12:20:55.540339   41443 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1202 12:20:55.540348   41443 command_runner.go:130] > # separate_pull_cgroup = ""
	I1202 12:20:55.540353   41443 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1202 12:20:55.540359   41443 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1202 12:20:55.540367   41443 command_runner.go:130] > # will be added.
	I1202 12:20:55.540371   41443 command_runner.go:130] > # default_capabilities = [
	I1202 12:20:55.540377   41443 command_runner.go:130] > # 	"CHOWN",
	I1202 12:20:55.540381   41443 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1202 12:20:55.540387   41443 command_runner.go:130] > # 	"FSETID",
	I1202 12:20:55.540390   41443 command_runner.go:130] > # 	"FOWNER",
	I1202 12:20:55.540397   41443 command_runner.go:130] > # 	"SETGID",
	I1202 12:20:55.540400   41443 command_runner.go:130] > # 	"SETUID",
	I1202 12:20:55.540406   41443 command_runner.go:130] > # 	"SETPCAP",
	I1202 12:20:55.540411   41443 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1202 12:20:55.540416   41443 command_runner.go:130] > # 	"KILL",
	I1202 12:20:55.540419   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540429   41443 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1202 12:20:55.540437   41443 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1202 12:20:55.540443   41443 command_runner.go:130] > # add_inheritable_capabilities = false
	I1202 12:20:55.540448   41443 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1202 12:20:55.540456   41443 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 12:20:55.540459   41443 command_runner.go:130] > default_sysctls = [
	I1202 12:20:55.540466   41443 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1202 12:20:55.540469   41443 command_runner.go:130] > ]
	I1202 12:20:55.540476   41443 command_runner.go:130] > # List of devices on the host that a
	I1202 12:20:55.540482   41443 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1202 12:20:55.540488   41443 command_runner.go:130] > # allowed_devices = [
	I1202 12:20:55.540492   41443 command_runner.go:130] > # 	"/dev/fuse",
	I1202 12:20:55.540497   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540506   41443 command_runner.go:130] > # List of additional devices, specified as
	I1202 12:20:55.540515   41443 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1202 12:20:55.540523   41443 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1202 12:20:55.540534   41443 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1202 12:20:55.540540   41443 command_runner.go:130] > # additional_devices = [
	I1202 12:20:55.540544   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540551   41443 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1202 12:20:55.540555   41443 command_runner.go:130] > # cdi_spec_dirs = [
	I1202 12:20:55.540561   41443 command_runner.go:130] > # 	"/etc/cdi",
	I1202 12:20:55.540566   41443 command_runner.go:130] > # 	"/var/run/cdi",
	I1202 12:20:55.540574   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540587   41443 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1202 12:20:55.540599   41443 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1202 12:20:55.540609   41443 command_runner.go:130] > # Defaults to false.
	I1202 12:20:55.540617   41443 command_runner.go:130] > # device_ownership_from_security_context = false
	I1202 12:20:55.540630   41443 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1202 12:20:55.540642   41443 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1202 12:20:55.540659   41443 command_runner.go:130] > # hooks_dir = [
	I1202 12:20:55.540670   41443 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1202 12:20:55.540676   41443 command_runner.go:130] > # ]
	I1202 12:20:55.540682   41443 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1202 12:20:55.540690   41443 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1202 12:20:55.540696   41443 command_runner.go:130] > # its default mounts from the following two files:
	I1202 12:20:55.540698   41443 command_runner.go:130] > #
	I1202 12:20:55.540704   41443 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1202 12:20:55.540712   41443 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1202 12:20:55.540720   41443 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1202 12:20:55.540728   41443 command_runner.go:130] > #
	I1202 12:20:55.540742   41443 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1202 12:20:55.540755   41443 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1202 12:20:55.540768   41443 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1202 12:20:55.540778   41443 command_runner.go:130] > #      only add mounts it finds in this file.
	I1202 12:20:55.540786   41443 command_runner.go:130] > #
	I1202 12:20:55.540800   41443 command_runner.go:130] > # default_mounts_file = ""
	I1202 12:20:55.540812   41443 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1202 12:20:55.540826   41443 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1202 12:20:55.540835   41443 command_runner.go:130] > pids_limit = 1024
	I1202 12:20:55.540847   41443 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1202 12:20:55.540855   41443 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1202 12:20:55.540863   41443 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1202 12:20:55.540873   41443 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1202 12:20:55.540879   41443 command_runner.go:130] > # log_size_max = -1
	I1202 12:20:55.540888   41443 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1202 12:20:55.540895   41443 command_runner.go:130] > # log_to_journald = false
	I1202 12:20:55.540901   41443 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1202 12:20:55.540907   41443 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1202 12:20:55.540912   41443 command_runner.go:130] > # Path to directory for container attach sockets.
	I1202 12:20:55.540919   41443 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1202 12:20:55.540924   41443 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1202 12:20:55.540930   41443 command_runner.go:130] > # bind_mount_prefix = ""
	I1202 12:20:55.540936   41443 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1202 12:20:55.540941   41443 command_runner.go:130] > # read_only = false
	I1202 12:20:55.540951   41443 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1202 12:20:55.540964   41443 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1202 12:20:55.540973   41443 command_runner.go:130] > # live configuration reload.
	I1202 12:20:55.540979   41443 command_runner.go:130] > # log_level = "info"
	I1202 12:20:55.540991   41443 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1202 12:20:55.541002   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.541013   41443 command_runner.go:130] > # log_filter = ""
	I1202 12:20:55.541025   41443 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1202 12:20:55.541038   41443 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1202 12:20:55.541047   41443 command_runner.go:130] > # separated by comma.
	I1202 12:20:55.541062   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541072   41443 command_runner.go:130] > # uid_mappings = ""
	I1202 12:20:55.541081   41443 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1202 12:20:55.541094   41443 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1202 12:20:55.541115   41443 command_runner.go:130] > # separated by comma.
	I1202 12:20:55.541134   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541143   41443 command_runner.go:130] > # gid_mappings = ""
	I1202 12:20:55.541152   41443 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1202 12:20:55.541160   41443 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 12:20:55.541166   41443 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 12:20:55.541176   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541182   41443 command_runner.go:130] > # minimum_mappable_uid = -1
	I1202 12:20:55.541188   41443 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1202 12:20:55.541196   41443 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1202 12:20:55.541202   41443 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1202 12:20:55.541212   41443 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1202 12:20:55.541221   41443 command_runner.go:130] > # minimum_mappable_gid = -1
	I1202 12:20:55.541229   41443 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1202 12:20:55.541234   41443 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1202 12:20:55.541242   41443 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1202 12:20:55.541245   41443 command_runner.go:130] > # ctr_stop_timeout = 30
	I1202 12:20:55.541254   41443 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1202 12:20:55.541259   41443 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1202 12:20:55.541266   41443 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1202 12:20:55.541271   41443 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1202 12:20:55.541277   41443 command_runner.go:130] > drop_infra_ctr = false
	I1202 12:20:55.541283   41443 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1202 12:20:55.541291   41443 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1202 12:20:55.541301   41443 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1202 12:20:55.541307   41443 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1202 12:20:55.541313   41443 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1202 12:20:55.541322   41443 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1202 12:20:55.541327   41443 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1202 12:20:55.541334   41443 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1202 12:20:55.541338   41443 command_runner.go:130] > # shared_cpuset = ""
	I1202 12:20:55.541346   41443 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1202 12:20:55.541350   41443 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1202 12:20:55.541365   41443 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1202 12:20:55.541375   41443 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1202 12:20:55.541381   41443 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1202 12:20:55.541386   41443 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1202 12:20:55.541395   41443 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1202 12:20:55.541402   41443 command_runner.go:130] > # enable_criu_support = false
	I1202 12:20:55.541407   41443 command_runner.go:130] > # Enable/disable the generation of the container,
	I1202 12:20:55.541414   41443 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1202 12:20:55.541421   41443 command_runner.go:130] > # enable_pod_events = false
	I1202 12:20:55.541428   41443 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1202 12:20:55.541452   41443 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1202 12:20:55.541461   41443 command_runner.go:130] > # default_runtime = "runc"
	I1202 12:20:55.541472   41443 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1202 12:20:55.541487   41443 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1202 12:20:55.541504   41443 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1202 12:20:55.541514   41443 command_runner.go:130] > # creation as a file is not desired either.
	I1202 12:20:55.541524   41443 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1202 12:20:55.541531   41443 command_runner.go:130] > # the hostname is being managed dynamically.
	I1202 12:20:55.541535   41443 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1202 12:20:55.541541   41443 command_runner.go:130] > # ]
	I1202 12:20:55.541547   41443 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1202 12:20:55.541555   41443 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1202 12:20:55.541560   41443 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1202 12:20:55.541572   41443 command_runner.go:130] > # Each entry in the table should follow the format:
	I1202 12:20:55.541577   41443 command_runner.go:130] > #
	I1202 12:20:55.541584   41443 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1202 12:20:55.541592   41443 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1202 12:20:55.541654   41443 command_runner.go:130] > # runtime_type = "oci"
	I1202 12:20:55.541665   41443 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1202 12:20:55.541671   41443 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1202 12:20:55.541675   41443 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1202 12:20:55.541680   41443 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1202 12:20:55.541689   41443 command_runner.go:130] > # monitor_env = []
	I1202 12:20:55.541696   41443 command_runner.go:130] > # privileged_without_host_devices = false
	I1202 12:20:55.541700   41443 command_runner.go:130] > # allowed_annotations = []
	I1202 12:20:55.541705   41443 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1202 12:20:55.541711   41443 command_runner.go:130] > # Where:
	I1202 12:20:55.541716   41443 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1202 12:20:55.541724   41443 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1202 12:20:55.541730   41443 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1202 12:20:55.541736   41443 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1202 12:20:55.541740   41443 command_runner.go:130] > #   in $PATH.
	I1202 12:20:55.541747   41443 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1202 12:20:55.541754   41443 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1202 12:20:55.541760   41443 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1202 12:20:55.541766   41443 command_runner.go:130] > #   state.
	I1202 12:20:55.541771   41443 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1202 12:20:55.541777   41443 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1202 12:20:55.541785   41443 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1202 12:20:55.541790   41443 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1202 12:20:55.541798   41443 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1202 12:20:55.541805   41443 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1202 12:20:55.541813   41443 command_runner.go:130] > #   The currently recognized values are:
	I1202 12:20:55.541819   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1202 12:20:55.541828   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1202 12:20:55.541836   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1202 12:20:55.541844   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1202 12:20:55.541851   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1202 12:20:55.541859   41443 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1202 12:20:55.541865   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1202 12:20:55.541873   41443 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1202 12:20:55.541879   41443 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1202 12:20:55.541887   41443 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1202 12:20:55.541891   41443 command_runner.go:130] > #   deprecated option "conmon".
	I1202 12:20:55.541900   41443 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1202 12:20:55.541915   41443 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1202 12:20:55.541923   41443 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1202 12:20:55.541928   41443 command_runner.go:130] > #   should be moved to the container's cgroup
	I1202 12:20:55.541938   41443 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1202 12:20:55.541942   41443 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1202 12:20:55.541948   41443 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1202 12:20:55.541956   41443 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1202 12:20:55.541958   41443 command_runner.go:130] > #
	I1202 12:20:55.541963   41443 command_runner.go:130] > # Using the seccomp notifier feature:
	I1202 12:20:55.541968   41443 command_runner.go:130] > #
	I1202 12:20:55.541973   41443 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1202 12:20:55.541982   41443 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1202 12:20:55.541985   41443 command_runner.go:130] > #
	I1202 12:20:55.541991   41443 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1202 12:20:55.541999   41443 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1202 12:20:55.542002   41443 command_runner.go:130] > #
	I1202 12:20:55.542008   41443 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1202 12:20:55.542013   41443 command_runner.go:130] > # feature.
	I1202 12:20:55.542016   41443 command_runner.go:130] > #
	I1202 12:20:55.542022   41443 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1202 12:20:55.542030   41443 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1202 12:20:55.542036   41443 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1202 12:20:55.542046   41443 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1202 12:20:55.542052   41443 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1202 12:20:55.542056   41443 command_runner.go:130] > #
	I1202 12:20:55.542061   41443 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1202 12:20:55.542069   41443 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1202 12:20:55.542073   41443 command_runner.go:130] > #
	I1202 12:20:55.542078   41443 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1202 12:20:55.542085   41443 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1202 12:20:55.542088   41443 command_runner.go:130] > #
	I1202 12:20:55.542094   41443 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1202 12:20:55.542102   41443 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1202 12:20:55.542115   41443 command_runner.go:130] > # limitation.
	I1202 12:20:55.542122   41443 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1202 12:20:55.542126   41443 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1202 12:20:55.542129   41443 command_runner.go:130] > runtime_type = "oci"
	I1202 12:20:55.542133   41443 command_runner.go:130] > runtime_root = "/run/runc"
	I1202 12:20:55.542137   41443 command_runner.go:130] > runtime_config_path = ""
	I1202 12:20:55.542142   41443 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1202 12:20:55.542146   41443 command_runner.go:130] > monitor_cgroup = "pod"
	I1202 12:20:55.542150   41443 command_runner.go:130] > monitor_exec_cgroup = ""
	I1202 12:20:55.542153   41443 command_runner.go:130] > monitor_env = [
	I1202 12:20:55.542159   41443 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1202 12:20:55.542163   41443 command_runner.go:130] > ]
	I1202 12:20:55.542168   41443 command_runner.go:130] > privileged_without_host_devices = false
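To make the runtime-handler format documented above concrete, here is a hedged sketch of registering a second handler alongside runc; crun, its path, and the chosen allowed annotation are assumptions for illustration only, not part of this configuration:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"            # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",        # one of the annotations recognized above
	]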
	I1202 12:20:55.542176   41443 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1202 12:20:55.542181   41443 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1202 12:20:55.542188   41443 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1202 12:20:55.542195   41443 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1202 12:20:55.542204   41443 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1202 12:20:55.542212   41443 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1202 12:20:55.542220   41443 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1202 12:20:55.542230   41443 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1202 12:20:55.542235   41443 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1202 12:20:55.542242   41443 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1202 12:20:55.542244   41443 command_runner.go:130] > # Example:
	I1202 12:20:55.542249   41443 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1202 12:20:55.542252   41443 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1202 12:20:55.542259   41443 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1202 12:20:55.542263   41443 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1202 12:20:55.542266   41443 command_runner.go:130] > # cpuset = 0
	I1202 12:20:55.542269   41443 command_runner.go:130] > # cpushares = "0-1"
	I1202 12:20:55.542272   41443 command_runner.go:130] > # Where:
	I1202 12:20:55.542277   41443 command_runner.go:130] > # The workload name is workload-type.
	I1202 12:20:55.542283   41443 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1202 12:20:55.542292   41443 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1202 12:20:55.542296   41443 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1202 12:20:55.542303   41443 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1202 12:20:55.542308   41443 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1202 12:20:55.542312   41443 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1202 12:20:55.542318   41443 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1202 12:20:55.542321   41443 command_runner.go:130] > # Default value is set to true
	I1202 12:20:55.542325   41443 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1202 12:20:55.542330   41443 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1202 12:20:55.542334   41443 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1202 12:20:55.542338   41443 command_runner.go:130] > # Default value is set to 'false'
	I1202 12:20:55.542342   41443 command_runner.go:130] > # disable_hostport_mapping = false
	I1202 12:20:55.542353   41443 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1202 12:20:55.542356   41443 command_runner.go:130] > #
	I1202 12:20:55.542363   41443 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1202 12:20:55.542369   41443 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1202 12:20:55.542375   41443 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1202 12:20:55.542380   41443 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1202 12:20:55.542385   41443 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1202 12:20:55.542388   41443 command_runner.go:130] > [crio.image]
	I1202 12:20:55.542393   41443 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1202 12:20:55.542397   41443 command_runner.go:130] > # default_transport = "docker://"
	I1202 12:20:55.542402   41443 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1202 12:20:55.542408   41443 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1202 12:20:55.542411   41443 command_runner.go:130] > # global_auth_file = ""
	I1202 12:20:55.542416   41443 command_runner.go:130] > # The image used to instantiate infra containers.
	I1202 12:20:55.542423   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.542427   41443 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1202 12:20:55.542433   41443 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1202 12:20:55.542441   41443 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1202 12:20:55.542449   41443 command_runner.go:130] > # This option supports live configuration reload.
	I1202 12:20:55.542455   41443 command_runner.go:130] > # pause_image_auth_file = ""
	I1202 12:20:55.542462   41443 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1202 12:20:55.542472   41443 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1202 12:20:55.542480   41443 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1202 12:20:55.542485   41443 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1202 12:20:55.542490   41443 command_runner.go:130] > # pause_command = "/pause"
	I1202 12:20:55.542496   41443 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1202 12:20:55.542509   41443 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1202 12:20:55.542514   41443 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1202 12:20:55.542522   41443 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1202 12:20:55.542527   41443 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1202 12:20:55.542535   41443 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1202 12:20:55.542539   41443 command_runner.go:130] > # pinned_images = [
	I1202 12:20:55.542544   41443 command_runner.go:130] > # ]
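As a hedged example of the pinning behavior described above, the pause image configured in this profile could be pinned explicitly with an exact-match pattern; the entry below is illustrative and not part of the generated config:

	pinned_images = [
		"registry.k8s.io/pause:3.10",
	]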
	I1202 12:20:55.542549   41443 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1202 12:20:55.542558   41443 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1202 12:20:55.542564   41443 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1202 12:20:55.542576   41443 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1202 12:20:55.542587   41443 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1202 12:20:55.542597   41443 command_runner.go:130] > # signature_policy = ""
	I1202 12:20:55.542605   41443 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1202 12:20:55.542618   41443 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1202 12:20:55.542631   41443 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1202 12:20:55.542643   41443 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1202 12:20:55.542655   41443 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1202 12:20:55.542665   41443 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1202 12:20:55.542673   41443 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1202 12:20:55.542681   41443 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1202 12:20:55.542685   41443 command_runner.go:130] > # changing them here.
	I1202 12:20:55.542691   41443 command_runner.go:130] > # insecure_registries = [
	I1202 12:20:55.542694   41443 command_runner.go:130] > # ]
	I1202 12:20:55.542700   41443 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1202 12:20:55.542707   41443 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1202 12:20:55.542711   41443 command_runner.go:130] > # image_volumes = "mkdir"
	I1202 12:20:55.542716   41443 command_runner.go:130] > # Temporary directory to use for storing big files
	I1202 12:20:55.542728   41443 command_runner.go:130] > # big_files_temporary_dir = ""
	I1202 12:20:55.542748   41443 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1202 12:20:55.542754   41443 command_runner.go:130] > # CNI plugins.
	I1202 12:20:55.542758   41443 command_runner.go:130] > [crio.network]
	I1202 12:20:55.542763   41443 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1202 12:20:55.542769   41443 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1202 12:20:55.542774   41443 command_runner.go:130] > # cni_default_network = ""
	I1202 12:20:55.542779   41443 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1202 12:20:55.542785   41443 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1202 12:20:55.542790   41443 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1202 12:20:55.542794   41443 command_runner.go:130] > # plugin_dirs = [
	I1202 12:20:55.542798   41443 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1202 12:20:55.542801   41443 command_runner.go:130] > # ]
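A hedged sketch of explicit [crio.network] overrides matching the defaults documented above; the network name is hypothetical (when it is left unset, CRI-O picks the first configuration found in network_dir):

	[crio.network]
	cni_default_network = "mynet"     # hypothetical CNI network name
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]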
	I1202 12:20:55.542807   41443 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1202 12:20:55.542811   41443 command_runner.go:130] > [crio.metrics]
	I1202 12:20:55.542816   41443 command_runner.go:130] > # Globally enable or disable metrics support.
	I1202 12:20:55.542820   41443 command_runner.go:130] > enable_metrics = true
	I1202 12:20:55.542825   41443 command_runner.go:130] > # Specify enabled metrics collectors.
	I1202 12:20:55.542832   41443 command_runner.go:130] > # Per default all metrics are enabled.
	I1202 12:20:55.542838   41443 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1202 12:20:55.542846   41443 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1202 12:20:55.542852   41443 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1202 12:20:55.542858   41443 command_runner.go:130] > # metrics_collectors = [
	I1202 12:20:55.542861   41443 command_runner.go:130] > # 	"operations",
	I1202 12:20:55.542865   41443 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1202 12:20:55.542870   41443 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1202 12:20:55.542874   41443 command_runner.go:130] > # 	"operations_errors",
	I1202 12:20:55.542878   41443 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1202 12:20:55.542882   41443 command_runner.go:130] > # 	"image_pulls_by_name",
	I1202 12:20:55.542886   41443 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1202 12:20:55.542890   41443 command_runner.go:130] > # 	"image_pulls_failures",
	I1202 12:20:55.542894   41443 command_runner.go:130] > # 	"image_pulls_successes",
	I1202 12:20:55.542898   41443 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1202 12:20:55.542906   41443 command_runner.go:130] > # 	"image_layer_reuse",
	I1202 12:20:55.542912   41443 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1202 12:20:55.542916   41443 command_runner.go:130] > # 	"containers_oom_total",
	I1202 12:20:55.542920   41443 command_runner.go:130] > # 	"containers_oom",
	I1202 12:20:55.542924   41443 command_runner.go:130] > # 	"processes_defunct",
	I1202 12:20:55.542928   41443 command_runner.go:130] > # 	"operations_total",
	I1202 12:20:55.542932   41443 command_runner.go:130] > # 	"operations_latency_seconds",
	I1202 12:20:55.542936   41443 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1202 12:20:55.542940   41443 command_runner.go:130] > # 	"operations_errors_total",
	I1202 12:20:55.542944   41443 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1202 12:20:55.542948   41443 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1202 12:20:55.542954   41443 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1202 12:20:55.542958   41443 command_runner.go:130] > # 	"image_pulls_success_total",
	I1202 12:20:55.542967   41443 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1202 12:20:55.542971   41443 command_runner.go:130] > # 	"containers_oom_count_total",
	I1202 12:20:55.542977   41443 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1202 12:20:55.542981   41443 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1202 12:20:55.542984   41443 command_runner.go:130] > # ]
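For illustration, a hedged sketch that enables only a subset of the collectors listed above; the names are taken from that list, but the particular selection is arbitrary:

	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]
	metrics_port = 9090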
	I1202 12:20:55.542989   41443 command_runner.go:130] > # The port on which the metrics server will listen.
	I1202 12:20:55.542994   41443 command_runner.go:130] > # metrics_port = 9090
	I1202 12:20:55.542998   41443 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1202 12:20:55.543004   41443 command_runner.go:130] > # metrics_socket = ""
	I1202 12:20:55.543009   41443 command_runner.go:130] > # The certificate for the secure metrics server.
	I1202 12:20:55.543014   41443 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1202 12:20:55.543021   41443 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1202 12:20:55.543025   41443 command_runner.go:130] > # certificate on any modification event.
	I1202 12:20:55.543030   41443 command_runner.go:130] > # metrics_cert = ""
	I1202 12:20:55.543035   41443 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1202 12:20:55.543042   41443 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1202 12:20:55.543046   41443 command_runner.go:130] > # metrics_key = ""
	I1202 12:20:55.543054   41443 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1202 12:20:55.543058   41443 command_runner.go:130] > [crio.tracing]
	I1202 12:20:55.543065   41443 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1202 12:20:55.543073   41443 command_runner.go:130] > # enable_tracing = false
	I1202 12:20:55.543081   41443 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1202 12:20:55.543085   41443 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1202 12:20:55.543091   41443 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1202 12:20:55.543098   41443 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1202 12:20:55.543101   41443 command_runner.go:130] > # CRI-O NRI configuration.
	I1202 12:20:55.543105   41443 command_runner.go:130] > [crio.nri]
	I1202 12:20:55.543113   41443 command_runner.go:130] > # Globally enable or disable NRI.
	I1202 12:20:55.543120   41443 command_runner.go:130] > # enable_nri = false
	I1202 12:20:55.543124   41443 command_runner.go:130] > # NRI socket to listen on.
	I1202 12:20:55.543128   41443 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1202 12:20:55.543132   41443 command_runner.go:130] > # NRI plugin directory to use.
	I1202 12:20:55.543137   41443 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1202 12:20:55.543143   41443 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1202 12:20:55.543147   41443 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1202 12:20:55.543153   41443 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1202 12:20:55.543157   41443 command_runner.go:130] > # nri_disable_connections = false
	I1202 12:20:55.543164   41443 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1202 12:20:55.543169   41443 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1202 12:20:55.543176   41443 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1202 12:20:55.543180   41443 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1202 12:20:55.543185   41443 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1202 12:20:55.543191   41443 command_runner.go:130] > [crio.stats]
	I1202 12:20:55.543200   41443 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1202 12:20:55.543207   41443 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1202 12:20:55.543211   41443 command_runner.go:130] > # stats_collection_period = 0
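For context, overrides like the ones visible in this dump (conmon, cgroup_manager, pause_image, metrics) are usually carried in a drop-in file rather than by editing crio.conf directly; a minimal sketch, assuming the conventional /etc/crio/crio.conf.d/ directory and a hypothetical file name, with values taken from the dump above:

	# /etc/crio/crio.conf.d/99-overrides.conf   (hypothetical drop-in)
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.metrics]
	enable_metrics = true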
	I1202 12:20:55.543291   41443 cni.go:84] Creating CNI manager for ""
	I1202 12:20:55.543302   41443 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 12:20:55.543311   41443 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:20:55.543333   41443 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-191330 NodeName:multinode-191330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:20:55.543459   41443 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-191330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.135"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:20:55.543519   41443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:20:55.553911   41443 command_runner.go:130] > kubeadm
	I1202 12:20:55.553926   41443 command_runner.go:130] > kubectl
	I1202 12:20:55.553930   41443 command_runner.go:130] > kubelet
	I1202 12:20:55.553948   41443 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:20:55.553994   41443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:20:55.563366   41443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1202 12:20:55.579319   41443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:20:55.595269   41443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1202 12:20:55.611869   41443 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I1202 12:20:55.615648   41443 command_runner.go:130] > 192.168.39.135	control-plane.minikube.internal
	I1202 12:20:55.615856   41443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:20:55.756053   41443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:20:55.770815   41443 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330 for IP: 192.168.39.135
	I1202 12:20:55.770831   41443 certs.go:194] generating shared ca certs ...
	I1202 12:20:55.770845   41443 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:20:55.770970   41443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:20:55.771013   41443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:20:55.771022   41443 certs.go:256] generating profile certs ...
	I1202 12:20:55.771099   41443 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/client.key
	I1202 12:20:55.771161   41443 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.key.cbfd379d
	I1202 12:20:55.771198   41443 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.key
	I1202 12:20:55.771208   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 12:20:55.771219   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 12:20:55.771232   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 12:20:55.771241   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 12:20:55.771258   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 12:20:55.771274   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 12:20:55.771286   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 12:20:55.771298   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 12:20:55.771343   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:20:55.771383   41443 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:20:55.771392   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:20:55.771415   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:20:55.771439   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:20:55.771459   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:20:55.771498   41443 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:20:55.771522   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> /usr/share/ca-certificates/134162.pem
	I1202 12:20:55.771565   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:55.771580   41443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem -> /usr/share/ca-certificates/13416.pem
	I1202 12:20:55.772151   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:20:55.795885   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:20:55.842071   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:20:55.868301   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:20:55.894659   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 12:20:55.922601   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 12:20:55.948867   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:20:55.975123   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/multinode-191330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 12:20:55.998127   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:20:56.021914   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:20:56.044642   41443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:20:56.067558   41443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:20:56.083404   41443 ssh_runner.go:195] Run: openssl version
	I1202 12:20:56.089051   41443 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1202 12:20:56.089126   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:20:56.099631   41443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.103910   41443 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.104092   41443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.104127   41443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:20:56.109547   41443 command_runner.go:130] > 51391683
	I1202 12:20:56.109776   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:20:56.118628   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:20:56.129157   41443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.133320   41443 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.133356   41443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.133392   41443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:20:56.138787   41443 command_runner.go:130] > 3ec20f2e
	I1202 12:20:56.138844   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:20:56.147932   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:20:56.158232   41443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.162263   41443 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.162385   41443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.162420   41443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:20:56.167552   41443 command_runner.go:130] > b5213941
	I1202 12:20:56.167784   41443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:20:56.176489   41443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:20:56.180837   41443 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:20:56.180852   41443 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 12:20:56.180858   41443 command_runner.go:130] > Device: 253,1	Inode: 3150382     Links: 1
	I1202 12:20:56.180866   41443 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 12:20:56.180877   41443 command_runner.go:130] > Access: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.180890   41443 command_runner.go:130] > Modify: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.180898   41443 command_runner.go:130] > Change: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.180907   41443 command_runner.go:130] >  Birth: 2024-12-02 12:14:17.855935809 +0000
	I1202 12:20:56.181027   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:20:56.186424   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.186496   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:20:56.191814   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.191848   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:20:56.196907   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.197034   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:20:56.202125   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.202319   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:20:56.207764   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.208023   41443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 12:20:56.213375   41443 command_runner.go:130] > Certificate will not expire
	I1202 12:20:56.213439   41443 kubeadm.go:392] StartCluster: {Name:multinode-191330 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-191330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.237 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:20:56.213551   41443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:20:56.213596   41443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:20:56.249377   41443 command_runner.go:130] > 033e9fdb82e3a453176eee4642267e4a17886d07c3831daab5ce66ef3578add8
	I1202 12:20:56.249395   41443 command_runner.go:130] > dc719184399ec6d98e14b0357d9c6ccb13904046023aae554d114e597606fdcd
	I1202 12:20:56.249401   41443 command_runner.go:130] > 79092084dc96f8588b9f2585e6e87ac26d73e584d9c2f6ecb36d4684787cb922
	I1202 12:20:56.249408   41443 command_runner.go:130] > 3720b0a3bc4d341bd1ad62ba26fc92aaf82f3292ab7a071010b806583f4fefe2
	I1202 12:20:56.249413   41443 command_runner.go:130] > 17338e5fa590eed42ecd771141d27b0642808ec6b373ca6b79282469cb80efab
	I1202 12:20:56.249418   41443 command_runner.go:130] > 578fa09f2e10474a35a428f12ab7c18b2f10f1622c251557f459dbe3b8c45e32
	I1202 12:20:56.249423   41443 command_runner.go:130] > 947ad842d5a96b79b1af99d79c6a81a4e264e14d2f878244a946aeec7e6716c0
	I1202 12:20:56.249430   41443 command_runner.go:130] > 3ceedb678ad61438963893a96dce32fb183748948869e8f30ce1161ff6d76fcc
	I1202 12:20:56.249447   41443 cri.go:89] found id: "033e9fdb82e3a453176eee4642267e4a17886d07c3831daab5ce66ef3578add8"
	I1202 12:20:56.249454   41443 cri.go:89] found id: "dc719184399ec6d98e14b0357d9c6ccb13904046023aae554d114e597606fdcd"
	I1202 12:20:56.249458   41443 cri.go:89] found id: "79092084dc96f8588b9f2585e6e87ac26d73e584d9c2f6ecb36d4684787cb922"
	I1202 12:20:56.249461   41443 cri.go:89] found id: "3720b0a3bc4d341bd1ad62ba26fc92aaf82f3292ab7a071010b806583f4fefe2"
	I1202 12:20:56.249463   41443 cri.go:89] found id: "17338e5fa590eed42ecd771141d27b0642808ec6b373ca6b79282469cb80efab"
	I1202 12:20:56.249467   41443 cri.go:89] found id: "578fa09f2e10474a35a428f12ab7c18b2f10f1622c251557f459dbe3b8c45e32"
	I1202 12:20:56.249470   41443 cri.go:89] found id: "947ad842d5a96b79b1af99d79c6a81a4e264e14d2f878244a946aeec7e6716c0"
	I1202 12:20:56.249472   41443 cri.go:89] found id: "3ceedb678ad61438963893a96dce32fb183748948869e8f30ce1161ff6d76fcc"
	I1202 12:20:56.249475   41443 cri.go:89] found id: ""
	I1202 12:20:56.249502   41443 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-191330 -n multinode-191330
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-191330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.03s)
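Note: the log above ends with minikube enumerating the kube-system containers over the CRI (`sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`) before the restart path continues. The Go sketch below is a hedged illustration of that one step for readers reproducing it by hand; it is not minikube's cri.go implementation, and the `listKubeSystemContainers` helper name is invented for this example. It assumes `sudo` and `crictl` are available on the node, as they are on the test VM.

// Illustrative sketch: list the IDs of all kube-system containers via crictl,
// mirroring the "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
// step seen in the log above. Not minikube's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	// With --quiet, crictl prints one container ID per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}

Run on the node, this should print the same eight container IDs reported in the log ("found id: 033e9fdb82e3a4..." and so on) when the control plane is up.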

                                                
                                    
x
+
TestPreload (164.37s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-404682 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1202 12:30:01.370334   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-404682 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.552499815s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-404682 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-404682 image pull gcr.io/k8s-minikube/busybox: (2.13564475s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-404682
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-404682: (7.28999803s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-404682 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-404682 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.614666039s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-404682 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-02 12:31:36.100033124 +0000 UTC m=+3674.931818141
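For context, the failure is that gcr.io/k8s-minikube/busybox, pulled before the stop, is missing from the image list after the restart. A minimal, hedged sketch of that assertion is shown below; it shells out to the same `minikube -p <profile> image list` command the test drives, but `checkBusyboxPreserved` and the hard-coded binary path and profile name are assumptions for illustration, not the real preload_test.go code.

// Sketch of the TestPreload check: an image pulled before "minikube stop"
// should still appear in "minikube image list" after the restart.
// Binary path and profile name are taken from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func checkBusyboxPreserved(minikubeBin, profile string) error {
	out, err := exec.Command(minikubeBin, "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		return fmt.Errorf("image list failed: %w\n%s", err, out)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		return fmt.Errorf("expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
	}
	return nil
}

func main() {
	if err := checkBusyboxPreserved("out/minikube-linux-amd64", "test-preload-404682"); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("busybox survived the restart")
}

Against the output captured above, this check fails because only the preloaded v1.24.4 control-plane images, storage-provisioner, and kindnetd are listed.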
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-404682 -n test-preload-404682
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-404682 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-404682 logs -n 25: (1.009919275s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330 sudo cat                                       | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m03_multinode-191330.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt                       | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m02:/home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n                                                                 | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | multinode-191330-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-191330 ssh -n multinode-191330-m02 sudo cat                                   | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	|         | /home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-191330 node stop m03                                                          | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:16 UTC |
	| node    | multinode-191330 node start                                                             | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:16 UTC | 02 Dec 24 12:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:17 UTC |                     |
	| stop    | -p multinode-191330                                                                     | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:17 UTC |                     |
	| start   | -p multinode-191330                                                                     | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:19 UTC | 02 Dec 24 12:22 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC |                     |
	| node    | multinode-191330 node delete                                                            | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC | 02 Dec 24 12:22 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-191330 stop                                                                   | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:22 UTC |                     |
	| start   | -p multinode-191330                                                                     | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:25 UTC | 02 Dec 24 12:28 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-191330                                                                | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC |                     |
	| start   | -p multinode-191330-m02                                                                 | multinode-191330-m02 | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-191330-m03                                                                 | multinode-191330-m03 | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC | 02 Dec 24 12:28 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-191330                                                                 | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC |                     |
	| delete  | -p multinode-191330-m03                                                                 | multinode-191330-m03 | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC | 02 Dec 24 12:28 UTC |
	| delete  | -p multinode-191330                                                                     | multinode-191330     | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC | 02 Dec 24 12:28 UTC |
	| start   | -p test-preload-404682                                                                  | test-preload-404682  | jenkins | v1.34.0 | 02 Dec 24 12:28 UTC | 02 Dec 24 12:30 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-404682 image pull                                                          | test-preload-404682  | jenkins | v1.34.0 | 02 Dec 24 12:30 UTC | 02 Dec 24 12:30 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-404682                                                                  | test-preload-404682  | jenkins | v1.34.0 | 02 Dec 24 12:30 UTC | 02 Dec 24 12:30 UTC |
	| start   | -p test-preload-404682                                                                  | test-preload-404682  | jenkins | v1.34.0 | 02 Dec 24 12:30 UTC | 02 Dec 24 12:31 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-404682 image list                                                          | test-preload-404682  | jenkins | v1.34.0 | 02 Dec 24 12:31 UTC | 02 Dec 24 12:31 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:30:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:30:34.311924   45666 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:30:34.312148   45666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:30:34.312156   45666 out.go:358] Setting ErrFile to fd 2...
	I1202 12:30:34.312161   45666 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:30:34.312374   45666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:30:34.312856   45666 out.go:352] Setting JSON to false
	I1202 12:30:34.313665   45666 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4386,"bootTime":1733138248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:30:34.313758   45666 start.go:139] virtualization: kvm guest
	I1202 12:30:34.315887   45666 out.go:177] * [test-preload-404682] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:30:34.316975   45666 notify.go:220] Checking for updates...
	I1202 12:30:34.317012   45666 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:30:34.318429   45666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:30:34.319620   45666 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:30:34.320673   45666 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:30:34.321623   45666 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:30:34.322721   45666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:30:34.324220   45666 config.go:182] Loaded profile config "test-preload-404682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1202 12:30:34.324624   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:30:34.324677   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:30:34.340512   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
	I1202 12:30:34.340862   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:30:34.341384   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:30:34.341407   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:30:34.341781   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:30:34.341950   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:34.343482   45666 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1202 12:30:34.344543   45666 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:30:34.344801   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:30:34.344834   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:30:34.358622   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46659
	I1202 12:30:34.359019   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:30:34.359467   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:30:34.359488   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:30:34.359894   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:30:34.360066   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:34.392506   45666 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:30:34.393759   45666 start.go:297] selected driver: kvm2
	I1202 12:30:34.393770   45666 start.go:901] validating driver "kvm2" against &{Name:test-preload-404682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:30:34.393871   45666 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:30:34.394492   45666 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:30:34.394545   45666 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:30:34.408269   45666 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:30:34.408685   45666 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:30:34.408720   45666 cni.go:84] Creating CNI manager for ""
	I1202 12:30:34.408771   45666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:30:34.408847   45666 start.go:340] cluster config:
	{Name:test-preload-404682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:30:34.408974   45666 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:30:34.411059   45666 out.go:177] * Starting "test-preload-404682" primary control-plane node in "test-preload-404682" cluster
	I1202 12:30:34.412002   45666 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1202 12:30:34.439372   45666 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1202 12:30:34.439389   45666 cache.go:56] Caching tarball of preloaded images
	I1202 12:30:34.439508   45666 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1202 12:30:34.441110   45666 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1202 12:30:34.442342   45666 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1202 12:30:34.474170   45666 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1202 12:30:38.208498   45666 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1202 12:30:38.208593   45666 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1202 12:30:39.046095   45666 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1202 12:30:39.046206   45666 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/config.json ...
	I1202 12:30:39.046502   45666 start.go:360] acquireMachinesLock for test-preload-404682: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:30:39.046563   45666 start.go:364] duration metric: took 40.81µs to acquireMachinesLock for "test-preload-404682"
	I1202 12:30:39.046580   45666 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:30:39.046585   45666 fix.go:54] fixHost starting: 
	I1202 12:30:39.046865   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:30:39.046899   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:30:39.060954   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I1202 12:30:39.061392   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:30:39.061845   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:30:39.061857   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:30:39.062148   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:30:39.062324   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:39.062457   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetState
	I1202 12:30:39.064103   45666 fix.go:112] recreateIfNeeded on test-preload-404682: state=Stopped err=<nil>
	I1202 12:30:39.064129   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	W1202 12:30:39.064256   45666 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:30:39.066028   45666 out.go:177] * Restarting existing kvm2 VM for "test-preload-404682" ...
	I1202 12:30:39.067243   45666 main.go:141] libmachine: (test-preload-404682) Calling .Start
	I1202 12:30:39.067405   45666 main.go:141] libmachine: (test-preload-404682) Ensuring networks are active...
	I1202 12:30:39.068086   45666 main.go:141] libmachine: (test-preload-404682) Ensuring network default is active
	I1202 12:30:39.068438   45666 main.go:141] libmachine: (test-preload-404682) Ensuring network mk-test-preload-404682 is active
	I1202 12:30:39.068802   45666 main.go:141] libmachine: (test-preload-404682) Getting domain xml...
	I1202 12:30:39.069393   45666 main.go:141] libmachine: (test-preload-404682) Creating domain...
	I1202 12:30:40.228988   45666 main.go:141] libmachine: (test-preload-404682) Waiting to get IP...
	I1202 12:30:40.229924   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:40.230349   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:40.230436   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:40.230336   45717 retry.go:31] will retry after 267.415775ms: waiting for machine to come up
	I1202 12:30:40.499778   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:40.500177   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:40.500198   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:40.500145   45717 retry.go:31] will retry after 307.13559ms: waiting for machine to come up
	I1202 12:30:40.808720   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:40.809158   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:40.809257   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:40.809130   45717 retry.go:31] will retry after 355.131642ms: waiting for machine to come up
	I1202 12:30:41.166314   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:41.166675   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:41.166704   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:41.166622   45717 retry.go:31] will retry after 421.352679ms: waiting for machine to come up
	I1202 12:30:41.589208   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:41.589631   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:41.589656   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:41.589589   45717 retry.go:31] will retry after 695.184728ms: waiting for machine to come up
	I1202 12:30:42.286303   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:42.286715   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:42.286743   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:42.286658   45717 retry.go:31] will retry after 594.682328ms: waiting for machine to come up
	I1202 12:30:42.883329   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:42.883792   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:42.883815   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:42.883757   45717 retry.go:31] will retry after 765.061531ms: waiting for machine to come up
	I1202 12:30:43.650621   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:43.650988   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:43.651013   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:43.650941   45717 retry.go:31] will retry after 1.146268435s: waiting for machine to come up
	I1202 12:30:44.799160   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:44.799594   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:44.799639   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:44.799549   45717 retry.go:31] will retry after 1.588342121s: waiting for machine to come up
	I1202 12:30:46.388995   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:46.389353   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:46.389387   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:46.389302   45717 retry.go:31] will retry after 2.292609402s: waiting for machine to come up
	I1202 12:30:48.683336   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:48.683787   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:48.683807   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:48.683749   45717 retry.go:31] will retry after 1.996057938s: waiting for machine to come up
	I1202 12:30:50.682197   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:50.682670   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:50.682702   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:50.682621   45717 retry.go:31] will retry after 3.360026338s: waiting for machine to come up
	I1202 12:30:54.046086   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:54.046591   45666 main.go:141] libmachine: (test-preload-404682) DBG | unable to find current IP address of domain test-preload-404682 in network mk-test-preload-404682
	I1202 12:30:54.046626   45666 main.go:141] libmachine: (test-preload-404682) DBG | I1202 12:30:54.046540   45717 retry.go:31] will retry after 2.763744463s: waiting for machine to come up
	I1202 12:30:56.813795   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.814292   45666 main.go:141] libmachine: (test-preload-404682) Found IP for machine: 192.168.39.206
	I1202 12:30:56.814325   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has current primary IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.814334   45666 main.go:141] libmachine: (test-preload-404682) Reserving static IP address...
	I1202 12:30:56.814711   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "test-preload-404682", mac: "52:54:00:a4:26:5c", ip: "192.168.39.206"} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:56.814751   45666 main.go:141] libmachine: (test-preload-404682) DBG | skip adding static IP to network mk-test-preload-404682 - found existing host DHCP lease matching {name: "test-preload-404682", mac: "52:54:00:a4:26:5c", ip: "192.168.39.206"}
	I1202 12:30:56.814762   45666 main.go:141] libmachine: (test-preload-404682) Reserved static IP address: 192.168.39.206
	I1202 12:30:56.814780   45666 main.go:141] libmachine: (test-preload-404682) Waiting for SSH to be available...
	I1202 12:30:56.814822   45666 main.go:141] libmachine: (test-preload-404682) DBG | Getting to WaitForSSH function...
	I1202 12:30:56.816914   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.817224   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:56.817256   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.817359   45666 main.go:141] libmachine: (test-preload-404682) DBG | Using SSH client type: external
	I1202 12:30:56.817378   45666 main.go:141] libmachine: (test-preload-404682) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa (-rw-------)
	I1202 12:30:56.817409   45666 main.go:141] libmachine: (test-preload-404682) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:30:56.817421   45666 main.go:141] libmachine: (test-preload-404682) DBG | About to run SSH command:
	I1202 12:30:56.817435   45666 main.go:141] libmachine: (test-preload-404682) DBG | exit 0
	I1202 12:30:56.943900   45666 main.go:141] libmachine: (test-preload-404682) DBG | SSH cmd err, output: <nil>: 
	I1202 12:30:56.944267   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetConfigRaw
	I1202 12:30:56.944878   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetIP
	I1202 12:30:56.946993   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.947305   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:56.947332   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.947594   45666 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/config.json ...
	I1202 12:30:56.947801   45666 machine.go:93] provisionDockerMachine start ...
	I1202 12:30:56.947817   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:56.948021   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:56.950459   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.950750   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:56.950779   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:56.950937   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:56.951117   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:56.951247   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:56.951372   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:56.951504   45666 main.go:141] libmachine: Using SSH client type: native
	I1202 12:30:56.951755   45666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1202 12:30:56.951768   45666 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:30:57.064141   45666 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:30:57.064168   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetMachineName
	I1202 12:30:57.064410   45666 buildroot.go:166] provisioning hostname "test-preload-404682"
	I1202 12:30:57.064439   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetMachineName
	I1202 12:30:57.064625   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:57.067069   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.067357   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.067387   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.067494   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:57.067638   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.067753   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.067852   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:57.067955   45666 main.go:141] libmachine: Using SSH client type: native
	I1202 12:30:57.068119   45666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1202 12:30:57.068131   45666 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-404682 && echo "test-preload-404682" | sudo tee /etc/hostname
	I1202 12:30:57.194570   45666 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-404682
	
	I1202 12:30:57.194595   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:57.197226   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.197579   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.197601   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.197776   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:57.197942   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.198081   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.198207   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:57.198373   45666 main.go:141] libmachine: Using SSH client type: native
	I1202 12:30:57.198584   45666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1202 12:30:57.198615   45666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-404682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-404682/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-404682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:30:57.316521   45666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:30:57.316549   45666 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:30:57.316565   45666 buildroot.go:174] setting up certificates
	I1202 12:30:57.316573   45666 provision.go:84] configureAuth start
	I1202 12:30:57.316581   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetMachineName
	I1202 12:30:57.316807   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetIP
	I1202 12:30:57.319302   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.319663   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.319690   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.319792   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:57.321874   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.322179   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.322202   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.322308   45666 provision.go:143] copyHostCerts
	I1202 12:30:57.322361   45666 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:30:57.322379   45666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:30:57.322442   45666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:30:57.322525   45666 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:30:57.322533   45666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:30:57.322556   45666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:30:57.322607   45666 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:30:57.322613   45666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:30:57.322632   45666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:30:57.322678   45666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.test-preload-404682 san=[127.0.0.1 192.168.39.206 localhost minikube test-preload-404682]
	I1202 12:30:57.515715   45666 provision.go:177] copyRemoteCerts
	I1202 12:30:57.515784   45666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:30:57.515811   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:57.518244   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.518592   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.518618   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.518782   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:57.518925   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.519065   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:57.519198   45666 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa Username:docker}
	I1202 12:30:57.605803   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:30:57.629304   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 12:30:57.652688   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 12:30:57.676093   45666 provision.go:87] duration metric: took 359.511819ms to configureAuth
	I1202 12:30:57.676116   45666 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:30:57.676287   45666 config.go:182] Loaded profile config "test-preload-404682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1202 12:30:57.676379   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:57.678900   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.679280   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.679309   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.679467   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:57.679622   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.679764   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.679904   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:57.680046   45666 main.go:141] libmachine: Using SSH client type: native
	I1202 12:30:57.680257   45666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1202 12:30:57.680277   45666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:30:57.908253   45666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:30:57.908283   45666 machine.go:96] duration metric: took 960.469491ms to provisionDockerMachine
	I1202 12:30:57.908303   45666 start.go:293] postStartSetup for "test-preload-404682" (driver="kvm2")
	I1202 12:30:57.908316   45666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:30:57.908345   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:57.908668   45666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:30:57.908701   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:57.911001   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.911384   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:57.911415   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:57.911551   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:57.911718   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:57.911859   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:57.911951   45666 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa Username:docker}
	I1202 12:30:57.998156   45666 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:30:58.002037   45666 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:30:58.002058   45666 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:30:58.002114   45666 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:30:58.002183   45666 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:30:58.002261   45666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:30:58.011226   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:30:58.034297   45666 start.go:296] duration metric: took 125.982178ms for postStartSetup
	I1202 12:30:58.034328   45666 fix.go:56] duration metric: took 18.987742228s for fixHost
	I1202 12:30:58.034345   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:58.036890   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.037193   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:58.037217   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.037349   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:58.037519   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:58.037684   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:58.037792   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:58.037943   45666 main.go:141] libmachine: Using SSH client type: native
	I1202 12:30:58.038094   45666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1202 12:30:58.038104   45666 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:30:58.152669   45666 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733142658.124117206
	
	I1202 12:30:58.152687   45666 fix.go:216] guest clock: 1733142658.124117206
	I1202 12:30:58.152694   45666 fix.go:229] Guest: 2024-12-02 12:30:58.124117206 +0000 UTC Remote: 2024-12-02 12:30:58.034332264 +0000 UTC m=+23.757744428 (delta=89.784942ms)
	I1202 12:30:58.152726   45666 fix.go:200] guest clock delta is within tolerance: 89.784942ms
	I1202 12:30:58.152731   45666 start.go:83] releasing machines lock for "test-preload-404682", held for 19.106157252s
	I1202 12:30:58.152751   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:58.152981   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetIP
	I1202 12:30:58.155341   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.155653   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:58.155680   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.155789   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:58.156263   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:58.156406   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:30:58.156489   45666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:30:58.156526   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:58.156590   45666 ssh_runner.go:195] Run: cat /version.json
	I1202 12:30:58.156604   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:30:58.158659   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.158912   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:58.158949   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.158974   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.159028   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:58.159203   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:58.159368   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:58.159370   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:58.159394   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:58.159495   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:30:58.159586   45666 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa Username:docker}
	I1202 12:30:58.159684   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:30:58.159811   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:30:58.159963   45666 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa Username:docker}
	I1202 12:30:58.264255   45666 ssh_runner.go:195] Run: systemctl --version
	I1202 12:30:58.269958   45666 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:30:58.410200   45666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:30:58.416742   45666 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:30:58.416803   45666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:30:58.432422   45666 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:30:58.432438   45666 start.go:495] detecting cgroup driver to use...
	I1202 12:30:58.432491   45666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:30:58.448634   45666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:30:58.462733   45666 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:30:58.462771   45666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:30:58.476010   45666 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:30:58.489350   45666 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:30:58.596780   45666 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:30:58.752931   45666 docker.go:233] disabling docker service ...
	I1202 12:30:58.753010   45666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:30:58.767220   45666 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:30:58.779465   45666 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:30:58.888718   45666 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:30:58.999628   45666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:30:59.012862   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:30:59.030406   45666 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1202 12:30:59.030453   45666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.040092   45666 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:30:59.040135   45666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.049980   45666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.059806   45666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.069606   45666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:30:59.079594   45666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.089151   45666 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.105409   45666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:30:59.114945   45666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:30:59.123870   45666 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:30:59.123919   45666 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:30:59.135877   45666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:30:59.144700   45666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:30:59.257987   45666 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:30:59.344533   45666 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:30:59.344608   45666 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:30:59.349554   45666 start.go:563] Will wait 60s for crictl version
	I1202 12:30:59.349603   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:30:59.353608   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:30:59.392921   45666 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:30:59.392985   45666 ssh_runner.go:195] Run: crio --version
	I1202 12:30:59.420171   45666 ssh_runner.go:195] Run: crio --version
	I1202 12:30:59.449651   45666 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1202 12:30:59.450687   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetIP
	I1202 12:30:59.452955   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:59.453238   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:30:59.453268   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:30:59.453437   45666 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:30:59.457375   45666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:30:59.469370   45666 kubeadm.go:883] updating cluster {Name:test-preload-404682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:30:59.469461   45666 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1202 12:30:59.469497   45666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:30:59.502564   45666 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1202 12:30:59.502608   45666 ssh_runner.go:195] Run: which lz4
	I1202 12:30:59.506335   45666 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:30:59.510257   45666 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:30:59.510281   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1202 12:31:01.000333   45666 crio.go:462] duration metric: took 1.494020617s to copy over tarball
	I1202 12:31:01.000397   45666 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:31:03.297565   45666 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.297143089s)
	I1202 12:31:03.297591   45666 crio.go:469] duration metric: took 2.297231413s to extract the tarball
	I1202 12:31:03.297599   45666 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 12:31:03.339884   45666 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:31:03.384506   45666 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1202 12:31:03.384526   45666 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 12:31:03.384585   45666 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:31:03.384596   45666 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:03.384628   45666 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:03.384642   45666 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1202 12:31:03.384656   45666 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:03.384607   45666 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:03.384689   45666 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:03.384706   45666 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:03.385934   45666 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:03.386021   45666 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:03.386026   45666 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1202 12:31:03.385935   45666 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:03.386041   45666 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:31:03.385935   45666 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:03.386109   45666 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:03.386092   45666 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:03.537583   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:03.537697   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:03.546374   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:03.551769   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:03.552452   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1202 12:31:03.558910   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:03.585123   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:03.689791   45666 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1202 12:31:03.689839   45666 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:03.689856   45666 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1202 12:31:03.689876   45666 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:03.689917   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.689921   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.700933   45666 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1202 12:31:03.700972   45666 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:03.701009   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.722666   45666 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1202 12:31:03.722703   45666 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:03.722736   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.729597   45666 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1202 12:31:03.729630   45666 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1202 12:31:03.729657   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.734064   45666 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1202 12:31:03.734088   45666 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:03.734119   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.734121   45666 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1202 12:31:03.734140   45666 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:03.734182   45666 ssh_runner.go:195] Run: which crictl
	I1202 12:31:03.734241   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:03.734272   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:03.734296   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:03.736079   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:03.745179   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1202 12:31:03.757490   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:03.843459   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:03.862270   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:03.862302   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:03.862367   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:03.865237   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:03.895464   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1202 12:31:03.913897   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:03.967104   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1202 12:31:04.000707   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1202 12:31:04.000712   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1202 12:31:04.056864   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:04.057967   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1202 12:31:04.072739   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1202 12:31:04.072757   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1202 12:31:04.072812   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1202 12:31:04.072909   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1202 12:31:04.187412   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1202 12:31:04.187542   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1202 12:31:04.188875   45666 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1202 12:31:04.189029   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1202 12:31:04.189146   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1202 12:31:04.206739   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1202 12:31:04.206820   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1202 12:31:04.206878   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1202 12:31:04.206915   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1202 12:31:04.206945   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1202 12:31:04.206960   45666 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1202 12:31:04.206971   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1202 12:31:04.206995   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1202 12:31:04.206998   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1202 12:31:04.206997   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1202 12:31:04.208902   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1202 12:31:04.247294   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1202 12:31:04.247410   45666 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1202 12:31:04.247530   45666 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1202 12:31:04.298946   45666 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:31:07.690932   45666 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.483909013s)
	I1202 12:31:07.690976   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1202 12:31:07.691029   45666 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.484031932s)
	I1202 12:31:07.691041   45666 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.484015019s)
	I1202 12:31:07.691056   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1202 12:31:07.691061   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1202 12:31:07.691069   45666 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1202 12:31:07.691111   45666 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.443553696s)
	I1202 12:31:07.691148   45666 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1202 12:31:07.691128   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1202 12:31:07.691166   45666 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.392196528s)
	I1202 12:31:08.430929   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1202 12:31:08.430964   45666 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1202 12:31:08.431014   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1202 12:31:10.575759   45666 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.144722224s)
	I1202 12:31:10.575788   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1202 12:31:10.575800   45666 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1202 12:31:10.575846   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1202 12:31:11.329526   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1202 12:31:11.329556   45666 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1202 12:31:11.329609   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1202 12:31:11.669638   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1202 12:31:11.669663   45666 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1202 12:31:11.669710   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1202 12:31:11.820291   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1202 12:31:11.820318   45666 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1202 12:31:11.820366   45666 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1202 12:31:12.671452   45666 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1202 12:31:12.671493   45666 cache_images.go:123] Successfully loaded all cached images
	I1202 12:31:12.671499   45666 cache_images.go:92] duration metric: took 9.286962478s to LoadCachedImages
	I1202 12:31:12.671515   45666 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.24.4 crio true true} ...
	I1202 12:31:12.671616   45666 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-404682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-404682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:31:12.671714   45666 ssh_runner.go:195] Run: crio config
	I1202 12:31:12.715842   45666 cni.go:84] Creating CNI manager for ""
	I1202 12:31:12.715861   45666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:31:12.715869   45666 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:31:12.715886   45666 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-404682 NodeName:test-preload-404682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:31:12.716043   45666 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-404682"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:31:12.716114   45666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1202 12:31:12.726358   45666 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:31:12.726419   45666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:31:12.735940   45666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1202 12:31:12.751910   45666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:31:12.767749   45666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
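The kubeadm/kubelet/kube-proxy document shown above is rendered in memory and then copied to /var/tmp/minikube/kubeadm.yaml.new, as the scp line indicates. A minimal sketch of generating such a document from a parameter struct with text/template; the struct fields and the trimmed-down template are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // clusterParams holds only the values that vary in the snippet above.
    type clusterParams struct {
        AdvertiseAddress  string
        BindPort          int
        NodeName          string
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        p := clusterParams{
            AdvertiseAddress:  "192.168.39.206",
            BindPort:          8443,
            NodeName:          "test-preload-404682",
            KubernetesVersion: "v1.24.4",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        }
        // Render the document to stdout; the real code writes it to the guest.
        tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }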
	I1202 12:31:12.784093   45666 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1202 12:31:12.787861   45666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
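The one-liner above makes the control-plane.minikube.internal alias idempotent: any stale entry is filtered out of /etc/hosts and the current IP is appended in a single pass. The same idea in Go, operating on a local copy of the file (the path in main is illustrative):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites a hosts file so that exactly one line maps
    // control-plane.minikube.internal to the given IP, mirroring the
    // grep/echo/cp one-liner in the log above.
    func ensureHostsEntry(path, ip string) error {
        const host = "control-plane.minikube.internal"
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for the control-plane alias
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/tmp/hosts-copy", "192.168.39.206"); err != nil {
            panic(err)
        }
    }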
	I1202 12:31:12.799814   45666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:31:12.919657   45666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:31:12.940776   45666 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682 for IP: 192.168.39.206
	I1202 12:31:12.940800   45666 certs.go:194] generating shared ca certs ...
	I1202 12:31:12.940820   45666 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:31:12.941008   45666 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:31:12.941067   45666 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:31:12.941082   45666 certs.go:256] generating profile certs ...
	I1202 12:31:12.941197   45666 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/client.key
	I1202 12:31:12.941273   45666 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/apiserver.key.fca3dab9
	I1202 12:31:12.941327   45666 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/proxy-client.key
	I1202 12:31:12.941508   45666 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:31:12.941552   45666 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:31:12.941567   45666 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:31:12.941656   45666 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:31:12.941710   45666 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:31:12.941741   45666 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:31:12.941802   45666 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:31:12.942721   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:31:12.969271   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:31:13.000711   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:31:13.034462   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:31:13.062386   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 12:31:13.095384   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 12:31:13.130908   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:31:13.164532   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:31:13.189706   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:31:13.212080   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:31:13.234362   45666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:31:13.256590   45666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:31:13.272498   45666 ssh_runner.go:195] Run: openssl version
	I1202 12:31:13.277821   45666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:31:13.288000   45666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:31:13.292201   45666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:31:13.292246   45666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:31:13.297725   45666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:31:13.307685   45666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:31:13.317749   45666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:31:13.322027   45666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:31:13.322071   45666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:31:13.327888   45666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:31:13.338142   45666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:31:13.348280   45666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:31:13.352629   45666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:31:13.352666   45666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:31:13.357898   45666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:31:13.367851   45666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:31:13.372170   45666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:31:13.377815   45666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:31:13.383267   45666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:31:13.388804   45666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:31:13.394159   45666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:31:13.399668   45666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
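Each "openssl x509 ... -checkend 86400" call above asserts that a control-plane certificate remains valid for at least another 24 hours. An equivalent pure-Go check with crypto/x509 for one PEM file; the path is taken from the log and the helper name is illustrative:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window (the -checkend equivalent).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }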
	I1202 12:31:13.405071   45666 kubeadm.go:392] StartCluster: {Name:test-preload-404682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-404682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:31:13.405158   45666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:31:13.405192   45666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:31:13.441493   45666 cri.go:89] found id: ""
	I1202 12:31:13.441535   45666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:31:13.451172   45666 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:31:13.451191   45666 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:31:13.451264   45666 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:31:13.460446   45666 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:31:13.460880   45666 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-404682" does not appear in /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:31:13.460979   45666 kubeconfig.go:62] /home/jenkins/minikube-integration/20033-6257/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-404682" cluster setting kubeconfig missing "test-preload-404682" context setting]
	I1202 12:31:13.461240   45666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:31:13.461876   45666 kapi.go:59] client config for test-preload-404682: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 12:31:13.462396   45666 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:31:13.471280   45666 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.206
	I1202 12:31:13.471310   45666 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:31:13.471326   45666 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:31:13.471374   45666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:31:13.513151   45666 cri.go:89] found id: ""
	I1202 12:31:13.513212   45666 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:31:13.528916   45666 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:31:13.539060   45666 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:31:13.539079   45666 kubeadm.go:157] found existing configuration files:
	
	I1202 12:31:13.539121   45666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:31:13.547798   45666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:31:13.547852   45666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:31:13.556745   45666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:31:13.566265   45666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:31:13.566310   45666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:31:13.575941   45666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:31:13.585149   45666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:31:13.585184   45666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:31:13.594936   45666 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:31:13.603430   45666 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:31:13.603471   45666 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:31:13.612254   45666 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:31:13.621096   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:31:13.720830   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:31:14.604264   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:31:14.869184   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:31:14.932426   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
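Rather than a full "kubeadm init", the restart replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the rendered config. A sketch of driving the same phases locally with os/exec; in the real run they are executed on the guest with the versioned binaries prepended to PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Phases in the same order as the log above.
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            if err != nil {
                fmt.Printf("kubeadm %v failed: %v\n%s\n", args, err, out)
                return
            }
        }
        fmt.Println("control plane phases replayed")
    }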
	I1202 12:31:15.010412   45666 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:31:15.010502   45666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:31:15.511313   45666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:31:16.010926   45666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:31:16.036679   45666 api_server.go:72] duration metric: took 1.026287561s to wait for apiserver process to appear ...
	I1202 12:31:16.036705   45666 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:31:16.036727   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:16.037185   45666 api_server.go:269] stopped: https://192.168.39.206:8443/healthz: Get "https://192.168.39.206:8443/healthz": dial tcp 192.168.39.206:8443: connect: connection refused
	I1202 12:31:16.537405   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:16.537962   45666 api_server.go:269] stopped: https://192.168.39.206:8443/healthz: Get "https://192.168.39.206:8443/healthz": dial tcp 192.168.39.206:8443: connect: connection refused
	I1202 12:31:17.037391   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:20.426949   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:31:20.426978   45666 api_server.go:103] status: https://192.168.39.206:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:31:20.426996   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:20.461436   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:31:20.461457   45666 api_server.go:103] status: https://192.168.39.206:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:31:20.537655   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:20.552358   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:31:20.552391   45666 api_server.go:103] status: https://192.168.39.206:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:31:21.036880   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:21.043033   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:31:21.043053   45666 api_server.go:103] status: https://192.168.39.206:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:31:21.537425   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:21.544808   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:31:21.544827   45666 api_server.go:103] status: https://192.168.39.206:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:31:22.037470   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:22.042140   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1202 12:31:22.048154   45666 api_server.go:141] control plane version: v1.24.4
	I1202 12:31:22.048176   45666 api_server.go:131] duration metric: took 6.011464328s to wait for apiserver health ...
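The apiserver wait above is a plain poll of https://192.168.39.206:8443/healthz: a 403 (anonymous user before the RBAC bootstrap roles exist) or a 500 (post-start hooks still settling) counts as not ready, and the loop stops at the first 200 "ok". A minimal sketch of such a poller; the skip-verify TLS transport is for illustration only, the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative only: skip serving-cert verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned "ok"
                }
                // 403 and 500 both mean the apiserver is up but not finished booting.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.206:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }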
	I1202 12:31:22.048184   45666 cni.go:84] Creating CNI manager for ""
	I1202 12:31:22.048192   45666 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:31:22.049916   45666 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:31:22.051262   45666 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:31:22.062315   45666 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 12:31:22.082730   45666 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:31:22.082817   45666 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 12:31:22.082843   45666 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 12:31:22.091211   45666 system_pods.go:59] 7 kube-system pods found
	I1202 12:31:22.091231   45666 system_pods.go:61] "coredns-6d4b75cb6d-99ff8" [fd9bc41f-9a2f-43d9-9296-e5a1d03ff222] Running
	I1202 12:31:22.091236   45666 system_pods.go:61] "etcd-test-preload-404682" [e3ee3829-8e44-4aa8-8b8f-8df1014b5eef] Running
	I1202 12:31:22.091240   45666 system_pods.go:61] "kube-apiserver-test-preload-404682" [634938fa-0ae8-4155-b6be-3123f14e938e] Running
	I1202 12:31:22.091244   45666 system_pods.go:61] "kube-controller-manager-test-preload-404682" [4df588db-ee02-4ff2-8aac-ee8bf2c54ebd] Running
	I1202 12:31:22.091251   45666 system_pods.go:61] "kube-proxy-d2k9g" [b06e0b45-03ae-4ee1-9042-13504452ad66] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 12:31:22.091260   45666 system_pods.go:61] "kube-scheduler-test-preload-404682" [b109fc23-80e1-4ffe-8ba9-599841863746] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 12:31:22.091278   45666 system_pods.go:61] "storage-provisioner" [98eefa9f-2810-4028-b81b-536da9ce44d0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 12:31:22.091287   45666 system_pods.go:74] duration metric: took 8.539339ms to wait for pod list to return data ...
	I1202 12:31:22.091299   45666 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:31:22.094211   45666 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:31:22.094239   45666 node_conditions.go:123] node cpu capacity is 2
	I1202 12:31:22.094253   45666 node_conditions.go:105] duration metric: took 2.948614ms to run NodePressure ...
	I1202 12:31:22.094269   45666 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:31:22.286901   45666 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1202 12:31:22.291045   45666 kubeadm.go:739] kubelet initialised
	I1202 12:31:22.291062   45666 kubeadm.go:740] duration metric: took 4.141034ms waiting for restarted kubelet to initialise ...
	I1202 12:31:22.291068   45666 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:31:22.297455   45666 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:22.301940   45666 pod_ready.go:98] node "test-preload-404682" hosting pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.301973   45666 pod_ready.go:82] duration metric: took 4.498113ms for pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace to be "Ready" ...
	E1202 12:31:22.301984   45666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-404682" hosting pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.301996   45666 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:22.306266   45666 pod_ready.go:98] node "test-preload-404682" hosting pod "etcd-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.306291   45666 pod_ready.go:82] duration metric: took 4.284675ms for pod "etcd-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	E1202 12:31:22.306301   45666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-404682" hosting pod "etcd-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.306309   45666 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:22.322089   45666 pod_ready.go:98] node "test-preload-404682" hosting pod "kube-apiserver-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.322104   45666 pod_ready.go:82] duration metric: took 15.782256ms for pod "kube-apiserver-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	E1202 12:31:22.322111   45666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-404682" hosting pod "kube-apiserver-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.322133   45666 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:22.486435   45666 pod_ready.go:98] node "test-preload-404682" hosting pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.486466   45666 pod_ready.go:82] duration metric: took 164.311189ms for pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	E1202 12:31:22.486475   45666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-404682" hosting pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.486489   45666 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d2k9g" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:22.886574   45666 pod_ready.go:98] node "test-preload-404682" hosting pod "kube-proxy-d2k9g" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.886604   45666 pod_ready.go:82] duration metric: took 400.104118ms for pod "kube-proxy-d2k9g" in "kube-system" namespace to be "Ready" ...
	E1202 12:31:22.886616   45666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-404682" hosting pod "kube-proxy-d2k9g" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:22.886625   45666 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:23.286577   45666 pod_ready.go:98] node "test-preload-404682" hosting pod "kube-scheduler-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:23.286608   45666 pod_ready.go:82] duration metric: took 399.975797ms for pod "kube-scheduler-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	E1202 12:31:23.286622   45666 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-404682" hosting pod "kube-scheduler-test-preload-404682" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:23.286629   45666 pod_ready.go:39] duration metric: took 995.552422ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
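The "Ready" waits in this phase reduce to reading pod (and node) conditions from the API. A small client-go sketch of the pod-side check, assuming a kubeconfig on disk; the kubeconfig path and pod name below are taken from the log for illustration:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has condition Ready=True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ok, err := podReady(context.Background(), cs, "kube-system", "etcd-test-preload-404682")
        fmt.Println("ready:", ok, "err:", err)
    }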
	I1202 12:31:23.286645   45666 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 12:31:23.299157   45666 ops.go:34] apiserver oom_adj: -16
	I1202 12:31:23.299177   45666 kubeadm.go:597] duration metric: took 9.847979363s to restartPrimaryControlPlane
	I1202 12:31:23.299187   45666 kubeadm.go:394] duration metric: took 9.89411963s to StartCluster
	I1202 12:31:23.299205   45666 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:31:23.299280   45666 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:31:23.299972   45666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:31:23.300217   45666 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:31:23.300272   45666 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 12:31:23.300375   45666 addons.go:69] Setting storage-provisioner=true in profile "test-preload-404682"
	I1202 12:31:23.300400   45666 addons.go:69] Setting default-storageclass=true in profile "test-preload-404682"
	I1202 12:31:23.300428   45666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-404682"
	I1202 12:31:23.300456   45666 config.go:182] Loaded profile config "test-preload-404682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1202 12:31:23.300405   45666 addons.go:234] Setting addon storage-provisioner=true in "test-preload-404682"
	W1202 12:31:23.300519   45666 addons.go:243] addon storage-provisioner should already be in state true
	I1202 12:31:23.300552   45666 host.go:66] Checking if "test-preload-404682" exists ...
	I1202 12:31:23.300824   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:31:23.300857   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:31:23.300923   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:31:23.300964   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:31:23.302617   45666 out.go:177] * Verifying Kubernetes components...
	I1202 12:31:23.303830   45666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:31:23.315340   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I1202 12:31:23.315367   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44791
	I1202 12:31:23.315814   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:31:23.315815   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:31:23.316330   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:31:23.316351   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:31:23.316480   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:31:23.316506   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:31:23.316756   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:31:23.316856   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:31:23.317017   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetState
	I1202 12:31:23.317290   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:31:23.317325   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:31:23.319258   45666 kapi.go:59] client config for test-preload-404682: &rest.Config{Host:"https://192.168.39.206:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/profiles/test-preload-404682/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 12:31:23.319503   45666 addons.go:234] Setting addon default-storageclass=true in "test-preload-404682"
	W1202 12:31:23.319516   45666 addons.go:243] addon default-storageclass should already be in state true
	I1202 12:31:23.319539   45666 host.go:66] Checking if "test-preload-404682" exists ...
	I1202 12:31:23.319806   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:31:23.319839   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:31:23.331398   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43257
	I1202 12:31:23.331820   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:31:23.332316   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:31:23.332339   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:31:23.332690   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:31:23.332844   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetState
	I1202 12:31:23.333458   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I1202 12:31:23.333792   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:31:23.334267   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:31:23.334296   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:31:23.334458   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:31:23.334611   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:31:23.335142   45666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:31:23.335183   45666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:31:23.336265   45666 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:31:23.337693   45666 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:31:23.337708   45666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 12:31:23.337720   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:31:23.340913   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:31:23.341335   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:31:23.341355   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:31:23.341521   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:31:23.341675   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:31:23.341812   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:31:23.341943   45666 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa Username:docker}
	I1202 12:31:23.370102   45666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I1202 12:31:23.370507   45666 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:31:23.371043   45666 main.go:141] libmachine: Using API Version  1
	I1202 12:31:23.371070   45666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:31:23.371449   45666 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:31:23.371656   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetState
	I1202 12:31:23.373126   45666 main.go:141] libmachine: (test-preload-404682) Calling .DriverName
	I1202 12:31:23.373399   45666 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 12:31:23.373419   45666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 12:31:23.373437   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHHostname
	I1202 12:31:23.375964   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:31:23.376390   45666 main.go:141] libmachine: (test-preload-404682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:26:5c", ip: ""} in network mk-test-preload-404682: {Iface:virbr1 ExpiryTime:2024-12-02 13:30:50 +0000 UTC Type:0 Mac:52:54:00:a4:26:5c Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:test-preload-404682 Clientid:01:52:54:00:a4:26:5c}
	I1202 12:31:23.376414   45666 main.go:141] libmachine: (test-preload-404682) DBG | domain test-preload-404682 has defined IP address 192.168.39.206 and MAC address 52:54:00:a4:26:5c in network mk-test-preload-404682
	I1202 12:31:23.376575   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHPort
	I1202 12:31:23.376815   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHKeyPath
	I1202 12:31:23.376991   45666 main.go:141] libmachine: (test-preload-404682) Calling .GetSSHUsername
	I1202 12:31:23.377146   45666 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/test-preload-404682/id_rsa Username:docker}
	I1202 12:31:23.485026   45666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:31:23.504217   45666 node_ready.go:35] waiting up to 6m0s for node "test-preload-404682" to be "Ready" ...
	I1202 12:31:23.598744   45666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:31:23.618735   45666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 12:31:24.592380   45666 main.go:141] libmachine: Making call to close driver server
	I1202 12:31:24.592418   45666 main.go:141] libmachine: (test-preload-404682) Calling .Close
	I1202 12:31:24.592469   45666 main.go:141] libmachine: Making call to close driver server
	I1202 12:31:24.592488   45666 main.go:141] libmachine: (test-preload-404682) Calling .Close
	I1202 12:31:24.592680   45666 main.go:141] libmachine: (test-preload-404682) DBG | Closing plugin on server side
	I1202 12:31:24.592716   45666 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:31:24.592729   45666 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:31:24.592741   45666 main.go:141] libmachine: Making call to close driver server
	I1202 12:31:24.592738   45666 main.go:141] libmachine: (test-preload-404682) DBG | Closing plugin on server side
	I1202 12:31:24.592761   45666 main.go:141] libmachine: (test-preload-404682) Calling .Close
	I1202 12:31:24.592715   45666 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:31:24.592808   45666 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:31:24.592816   45666 main.go:141] libmachine: Making call to close driver server
	I1202 12:31:24.592826   45666 main.go:141] libmachine: (test-preload-404682) Calling .Close
	I1202 12:31:24.592969   45666 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:31:24.592983   45666 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:31:24.593048   45666 main.go:141] libmachine: (test-preload-404682) DBG | Closing plugin on server side
	I1202 12:31:24.593058   45666 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:31:24.593069   45666 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:31:24.598746   45666 main.go:141] libmachine: Making call to close driver server
	I1202 12:31:24.598761   45666 main.go:141] libmachine: (test-preload-404682) Calling .Close
	I1202 12:31:24.598969   45666 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:31:24.599010   45666 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:31:24.598990   45666 main.go:141] libmachine: (test-preload-404682) DBG | Closing plugin on server side
	I1202 12:31:24.601106   45666 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1202 12:31:24.602307   45666 addons.go:510] duration metric: took 1.302041978s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 12:31:25.507328   45666 node_ready.go:53] node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:27.508175   45666 node_ready.go:53] node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:29.508755   45666 node_ready.go:53] node "test-preload-404682" has status "Ready":"False"
	I1202 12:31:31.007732   45666 node_ready.go:49] node "test-preload-404682" has status "Ready":"True"
	I1202 12:31:31.007759   45666 node_ready.go:38] duration metric: took 7.50350741s for node "test-preload-404682" to be "Ready" ...
	I1202 12:31:31.007770   45666 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:31:31.012649   45666 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:31.017038   45666 pod_ready.go:93] pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace has status "Ready":"True"
	I1202 12:31:31.017059   45666 pod_ready.go:82] duration metric: took 4.390853ms for pod "coredns-6d4b75cb6d-99ff8" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:31.017069   45666 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:32.522933   45666 pod_ready.go:93] pod "etcd-test-preload-404682" in "kube-system" namespace has status "Ready":"True"
	I1202 12:31:32.522955   45666 pod_ready.go:82] duration metric: took 1.505878137s for pod "etcd-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:32.522967   45666 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:32.526957   45666 pod_ready.go:93] pod "kube-apiserver-test-preload-404682" in "kube-system" namespace has status "Ready":"True"
	I1202 12:31:32.526974   45666 pod_ready.go:82] duration metric: took 3.999249ms for pod "kube-apiserver-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:32.526982   45666 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:34.532731   45666 pod_ready.go:103] pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace has status "Ready":"False"
	I1202 12:31:35.034001   45666 pod_ready.go:93] pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace has status "Ready":"True"
	I1202 12:31:35.034025   45666 pod_ready.go:82] duration metric: took 2.507035979s for pod "kube-controller-manager-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:35.034037   45666 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d2k9g" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:35.038705   45666 pod_ready.go:93] pod "kube-proxy-d2k9g" in "kube-system" namespace has status "Ready":"True"
	I1202 12:31:35.038723   45666 pod_ready.go:82] duration metric: took 4.677744ms for pod "kube-proxy-d2k9g" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:35.038741   45666 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:35.045538   45666 pod_ready.go:93] pod "kube-scheduler-test-preload-404682" in "kube-system" namespace has status "Ready":"True"
	I1202 12:31:35.045552   45666 pod_ready.go:82] duration metric: took 6.804302ms for pod "kube-scheduler-test-preload-404682" in "kube-system" namespace to be "Ready" ...
	I1202 12:31:35.045560   45666 pod_ready.go:39] duration metric: took 4.037779147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:31:35.045573   45666 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:31:35.045617   45666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:31:35.061989   45666 api_server.go:72] duration metric: took 11.761723834s to wait for apiserver process to appear ...
	I1202 12:31:35.062010   45666 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:31:35.062027   45666 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1202 12:31:35.067608   45666 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1202 12:31:35.068413   45666 api_server.go:141] control plane version: v1.24.4
	I1202 12:31:35.068433   45666 api_server.go:131] duration metric: took 6.416203ms to wait for apiserver health ...
	I1202 12:31:35.068441   45666 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:31:35.210914   45666 system_pods.go:59] 7 kube-system pods found
	I1202 12:31:35.210940   45666 system_pods.go:61] "coredns-6d4b75cb6d-99ff8" [fd9bc41f-9a2f-43d9-9296-e5a1d03ff222] Running
	I1202 12:31:35.210945   45666 system_pods.go:61] "etcd-test-preload-404682" [e3ee3829-8e44-4aa8-8b8f-8df1014b5eef] Running
	I1202 12:31:35.210948   45666 system_pods.go:61] "kube-apiserver-test-preload-404682" [634938fa-0ae8-4155-b6be-3123f14e938e] Running
	I1202 12:31:35.210952   45666 system_pods.go:61] "kube-controller-manager-test-preload-404682" [4df588db-ee02-4ff2-8aac-ee8bf2c54ebd] Running
	I1202 12:31:35.210954   45666 system_pods.go:61] "kube-proxy-d2k9g" [b06e0b45-03ae-4ee1-9042-13504452ad66] Running
	I1202 12:31:35.210957   45666 system_pods.go:61] "kube-scheduler-test-preload-404682" [b109fc23-80e1-4ffe-8ba9-599841863746] Running
	I1202 12:31:35.210960   45666 system_pods.go:61] "storage-provisioner" [98eefa9f-2810-4028-b81b-536da9ce44d0] Running
	I1202 12:31:35.210968   45666 system_pods.go:74] duration metric: took 142.51932ms to wait for pod list to return data ...
	I1202 12:31:35.210976   45666 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:31:35.408701   45666 default_sa.go:45] found service account: "default"
	I1202 12:31:35.408722   45666 default_sa.go:55] duration metric: took 197.74031ms for default service account to be created ...
	I1202 12:31:35.408730   45666 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:31:35.610155   45666 system_pods.go:86] 7 kube-system pods found
	I1202 12:31:35.610183   45666 system_pods.go:89] "coredns-6d4b75cb6d-99ff8" [fd9bc41f-9a2f-43d9-9296-e5a1d03ff222] Running
	I1202 12:31:35.610189   45666 system_pods.go:89] "etcd-test-preload-404682" [e3ee3829-8e44-4aa8-8b8f-8df1014b5eef] Running
	I1202 12:31:35.610198   45666 system_pods.go:89] "kube-apiserver-test-preload-404682" [634938fa-0ae8-4155-b6be-3123f14e938e] Running
	I1202 12:31:35.610202   45666 system_pods.go:89] "kube-controller-manager-test-preload-404682" [4df588db-ee02-4ff2-8aac-ee8bf2c54ebd] Running
	I1202 12:31:35.610205   45666 system_pods.go:89] "kube-proxy-d2k9g" [b06e0b45-03ae-4ee1-9042-13504452ad66] Running
	I1202 12:31:35.610208   45666 system_pods.go:89] "kube-scheduler-test-preload-404682" [b109fc23-80e1-4ffe-8ba9-599841863746] Running
	I1202 12:31:35.610211   45666 system_pods.go:89] "storage-provisioner" [98eefa9f-2810-4028-b81b-536da9ce44d0] Running
	I1202 12:31:35.610217   45666 system_pods.go:126] duration metric: took 201.482495ms to wait for k8s-apps to be running ...
	I1202 12:31:35.610223   45666 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:31:35.610261   45666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:31:35.625633   45666 system_svc.go:56] duration metric: took 15.405083ms WaitForService to wait for kubelet
	I1202 12:31:35.625656   45666 kubeadm.go:582] duration metric: took 12.325393029s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:31:35.625670   45666 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:31:35.808790   45666 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:31:35.808813   45666 node_conditions.go:123] node cpu capacity is 2
	I1202 12:31:35.808821   45666 node_conditions.go:105] duration metric: took 183.147315ms to run NodePressure ...
	I1202 12:31:35.808831   45666 start.go:241] waiting for startup goroutines ...
	I1202 12:31:35.808837   45666 start.go:246] waiting for cluster config update ...
	I1202 12:31:35.808847   45666 start.go:255] writing updated cluster config ...
	I1202 12:31:35.809098   45666 ssh_runner.go:195] Run: rm -f paused
	I1202 12:31:35.854303   45666 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1202 12:31:35.856137   45666 out.go:201] 
	W1202 12:31:35.857517   45666 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1202 12:31:35.858641   45666 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1202 12:31:35.859800   45666 out.go:177] * Done! kubectl is now configured to use "test-preload-404682" cluster and "default" namespace by default
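
Editor's note: the start log above ends with the apiserver health wait, where api_server.go polls https://192.168.39.206:8443/healthz until it returns 200 with body "ok". The following is a minimal Go sketch of that kind of wait, not minikube's actual implementation; the hard-coded endpoint, the two-minute timeout, and the skipped TLS verification are assumptions for illustration only.

    // healthz_wait.go: illustrative sketch of polling the apiserver healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// The apiserver in this setup serves a self-signed certificate, so
    		// verification is skipped here purely for the sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// Matches the `returned 200: ok` lines in the log above.
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.206:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver is healthy")
    }

The same check can be reproduced by hand with curl -k against the same URL.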
	
	
	==> CRI-O <==
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.723161797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733142696723143440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f88721e-5a6f-4d80-9573-2b0c16730c73 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.723728519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cedb42b-42c6-4ec5-b5c9-8e7b5286d449 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.723799577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cedb42b-42c6-4ec5-b5c9-8e7b5286d449 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.723945057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e5364583f01313c780eea2bfed594be1a07d636ce775e596c6b6d8a78d22944,PodSandboxId:882677737d3e2b86a2b7df99801a1bae58ab7ccfe42972c9ca96b1c78065cb96,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733142689512575637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-99ff8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9bc41f-9a2f-43d9-9296-e5a1d03ff222,},Annotations:map[string]string{io.kubernetes.container.hash: 7484b187,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1587657a8101116e1acb57ab9b642dc89b02b9f29f181c24dae1ed3de93fa34b,PodSandboxId:c947954c442f4aa58b76e3a9e7f80ce0c5a683478ee90c70d502b431089dde1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733142682671071635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2k9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b06e0b45-03ae-4ee1-9042-13504452ad66,},Annotations:map[string]string{io.kubernetes.container.hash: 7d828d1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b39ef8112e71fc5ee0e157ffd904b366e7ba8ba27d9d4d0c574267f21da2ad5,PodSandboxId:a6145ffa8fc121fa2a31ad21f38e9254d36b73a94743b27f88c324fedd5b37da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733142682353360512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
eefa9f-2810-4028-b81b-536da9ce44d0,},Annotations:map[string]string{io.kubernetes.container.hash: 44c660dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdfedcd4747e3603e741b8e3e76670b6eaa09c15f03afe311757ec85278154b,PodSandboxId:2b00c678fcfd31c7b115411459dd0eea11978f90fd2e60334b3679cb2e759877,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733142675787805222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 521a041db53d9ce9e53c091c91741320,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5536104bb57b979b6269bcd5b9f4641bf57ded18e63c80655ef401f1856c9a,PodSandboxId:711507ff939cfbdbb285e85486f23896e86435c328c162540b8ab5c6b6602b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733142675738110809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2324dcc611c9159a4851914ed54699ac,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a708ebc492fd87d5d5633e52317e79cd32dd7b3a8851e434036d1a5766db1adc,PodSandboxId:8bee0e3b8bdcb8a77ab2f47b7a3c3b4a9bcd070434c8894ab45b2c58015777f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733142675771800371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c523ea67621368be4a17027a371233e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 455d72d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68bf8a14517ce2049d5c28fe5b9c5f96cde72a72f626fd2d98c25db182dbf35,PodSandboxId:f47975e69441165864ec545df67f1a329f576e97201277fbd05b09abbf9f9e66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733142675709071249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703fc2cb89a57467474e4f73926664ed,},Annotation
s:map[string]string{io.kubernetes.container.hash: 41943a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cedb42b-42c6-4ec5-b5c9-8e7b5286d449 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.756661193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c9471ee-69a2-4ef1-a8b7-57c88c31e89f name=/runtime.v1.RuntimeService/Version
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.756738701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c9471ee-69a2-4ef1-a8b7-57c88c31e89f name=/runtime.v1.RuntimeService/Version
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.757605402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35fe9721-1b53-4083-881e-df0f2e242180 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.758060372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733142696758042315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35fe9721-1b53-4083-881e-df0f2e242180 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.758617314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bec1f8db-9f26-4ded-994a-b2f0758bb986 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.758679687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bec1f8db-9f26-4ded-994a-b2f0758bb986 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.758837729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e5364583f01313c780eea2bfed594be1a07d636ce775e596c6b6d8a78d22944,PodSandboxId:882677737d3e2b86a2b7df99801a1bae58ab7ccfe42972c9ca96b1c78065cb96,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733142689512575637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-99ff8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9bc41f-9a2f-43d9-9296-e5a1d03ff222,},Annotations:map[string]string{io.kubernetes.container.hash: 7484b187,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1587657a8101116e1acb57ab9b642dc89b02b9f29f181c24dae1ed3de93fa34b,PodSandboxId:c947954c442f4aa58b76e3a9e7f80ce0c5a683478ee90c70d502b431089dde1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733142682671071635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2k9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b06e0b45-03ae-4ee1-9042-13504452ad66,},Annotations:map[string]string{io.kubernetes.container.hash: 7d828d1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b39ef8112e71fc5ee0e157ffd904b366e7ba8ba27d9d4d0c574267f21da2ad5,PodSandboxId:a6145ffa8fc121fa2a31ad21f38e9254d36b73a94743b27f88c324fedd5b37da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733142682353360512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
eefa9f-2810-4028-b81b-536da9ce44d0,},Annotations:map[string]string{io.kubernetes.container.hash: 44c660dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdfedcd4747e3603e741b8e3e76670b6eaa09c15f03afe311757ec85278154b,PodSandboxId:2b00c678fcfd31c7b115411459dd0eea11978f90fd2e60334b3679cb2e759877,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733142675787805222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 521a041db53d9ce9e53c091c91741320,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5536104bb57b979b6269bcd5b9f4641bf57ded18e63c80655ef401f1856c9a,PodSandboxId:711507ff939cfbdbb285e85486f23896e86435c328c162540b8ab5c6b6602b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733142675738110809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2324dcc611c9159a4851914ed54699ac,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a708ebc492fd87d5d5633e52317e79cd32dd7b3a8851e434036d1a5766db1adc,PodSandboxId:8bee0e3b8bdcb8a77ab2f47b7a3c3b4a9bcd070434c8894ab45b2c58015777f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733142675771800371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c523ea67621368be4a17027a371233e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 455d72d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68bf8a14517ce2049d5c28fe5b9c5f96cde72a72f626fd2d98c25db182dbf35,PodSandboxId:f47975e69441165864ec545df67f1a329f576e97201277fbd05b09abbf9f9e66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733142675709071249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703fc2cb89a57467474e4f73926664ed,},Annotation
s:map[string]string{io.kubernetes.container.hash: 41943a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bec1f8db-9f26-4ded-994a-b2f0758bb986 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.792243385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef4a8d5c-f18d-4c78-bd1c-0bacf078dfa6 name=/runtime.v1.RuntimeService/Version
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.792298108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef4a8d5c-f18d-4c78-bd1c-0bacf078dfa6 name=/runtime.v1.RuntimeService/Version
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.793236236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aecc2bfb-32e5-48fd-b9b1-d24813713112 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.793691817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733142696793673203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aecc2bfb-32e5-48fd-b9b1-d24813713112 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.794296150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59ec59dd-0600-4a13-a9a2-95bada8960f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.794362178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59ec59dd-0600-4a13-a9a2-95bada8960f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.794570582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e5364583f01313c780eea2bfed594be1a07d636ce775e596c6b6d8a78d22944,PodSandboxId:882677737d3e2b86a2b7df99801a1bae58ab7ccfe42972c9ca96b1c78065cb96,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733142689512575637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-99ff8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9bc41f-9a2f-43d9-9296-e5a1d03ff222,},Annotations:map[string]string{io.kubernetes.container.hash: 7484b187,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1587657a8101116e1acb57ab9b642dc89b02b9f29f181c24dae1ed3de93fa34b,PodSandboxId:c947954c442f4aa58b76e3a9e7f80ce0c5a683478ee90c70d502b431089dde1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733142682671071635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2k9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b06e0b45-03ae-4ee1-9042-13504452ad66,},Annotations:map[string]string{io.kubernetes.container.hash: 7d828d1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b39ef8112e71fc5ee0e157ffd904b366e7ba8ba27d9d4d0c574267f21da2ad5,PodSandboxId:a6145ffa8fc121fa2a31ad21f38e9254d36b73a94743b27f88c324fedd5b37da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733142682353360512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
eefa9f-2810-4028-b81b-536da9ce44d0,},Annotations:map[string]string{io.kubernetes.container.hash: 44c660dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdfedcd4747e3603e741b8e3e76670b6eaa09c15f03afe311757ec85278154b,PodSandboxId:2b00c678fcfd31c7b115411459dd0eea11978f90fd2e60334b3679cb2e759877,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733142675787805222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 521a041db53d9ce9e53c091c91741320,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5536104bb57b979b6269bcd5b9f4641bf57ded18e63c80655ef401f1856c9a,PodSandboxId:711507ff939cfbdbb285e85486f23896e86435c328c162540b8ab5c6b6602b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733142675738110809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2324dcc611c9159a4851914ed54699ac,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a708ebc492fd87d5d5633e52317e79cd32dd7b3a8851e434036d1a5766db1adc,PodSandboxId:8bee0e3b8bdcb8a77ab2f47b7a3c3b4a9bcd070434c8894ab45b2c58015777f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733142675771800371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c523ea67621368be4a17027a371233e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 455d72d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68bf8a14517ce2049d5c28fe5b9c5f96cde72a72f626fd2d98c25db182dbf35,PodSandboxId:f47975e69441165864ec545df67f1a329f576e97201277fbd05b09abbf9f9e66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733142675709071249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703fc2cb89a57467474e4f73926664ed,},Annotation
s:map[string]string{io.kubernetes.container.hash: 41943a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59ec59dd-0600-4a13-a9a2-95bada8960f9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.823874735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9452295-5381-41d5-99f1-856bc4ea9dcb name=/runtime.v1.RuntimeService/Version
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.823935635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9452295-5381-41d5-99f1-856bc4ea9dcb name=/runtime.v1.RuntimeService/Version
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.825041387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c17695f7-50e8-463c-b334-a046b61add03 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.825484887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733142696825466442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c17695f7-50e8-463c-b334-a046b61add03 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.826017163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6e17ed1-4d01-4132-ad10-91ce0dd3bfbc name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.826065866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6e17ed1-4d01-4132-ad10-91ce0dd3bfbc name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:31:36 test-preload-404682 crio[681]: time="2024-12-02 12:31:36.826203844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e5364583f01313c780eea2bfed594be1a07d636ce775e596c6b6d8a78d22944,PodSandboxId:882677737d3e2b86a2b7df99801a1bae58ab7ccfe42972c9ca96b1c78065cb96,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733142689512575637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-99ff8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd9bc41f-9a2f-43d9-9296-e5a1d03ff222,},Annotations:map[string]string{io.kubernetes.container.hash: 7484b187,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1587657a8101116e1acb57ab9b642dc89b02b9f29f181c24dae1ed3de93fa34b,PodSandboxId:c947954c442f4aa58b76e3a9e7f80ce0c5a683478ee90c70d502b431089dde1d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733142682671071635,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d2k9g,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b06e0b45-03ae-4ee1-9042-13504452ad66,},Annotations:map[string]string{io.kubernetes.container.hash: 7d828d1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b39ef8112e71fc5ee0e157ffd904b366e7ba8ba27d9d4d0c574267f21da2ad5,PodSandboxId:a6145ffa8fc121fa2a31ad21f38e9254d36b73a94743b27f88c324fedd5b37da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733142682353360512,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
eefa9f-2810-4028-b81b-536da9ce44d0,},Annotations:map[string]string{io.kubernetes.container.hash: 44c660dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cdfedcd4747e3603e741b8e3e76670b6eaa09c15f03afe311757ec85278154b,PodSandboxId:2b00c678fcfd31c7b115411459dd0eea11978f90fd2e60334b3679cb2e759877,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733142675787805222,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 521a041db53d9ce9e53c091c91741320,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5536104bb57b979b6269bcd5b9f4641bf57ded18e63c80655ef401f1856c9a,PodSandboxId:711507ff939cfbdbb285e85486f23896e86435c328c162540b8ab5c6b6602b98,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733142675738110809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2324dcc611c9159a4851914ed54699ac,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a708ebc492fd87d5d5633e52317e79cd32dd7b3a8851e434036d1a5766db1adc,PodSandboxId:8bee0e3b8bdcb8a77ab2f47b7a3c3b4a9bcd070434c8894ab45b2c58015777f1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733142675771800371,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c523ea67621368be4a17027a371233e,}
,Annotations:map[string]string{io.kubernetes.container.hash: 455d72d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68bf8a14517ce2049d5c28fe5b9c5f96cde72a72f626fd2d98c25db182dbf35,PodSandboxId:f47975e69441165864ec545df67f1a329f576e97201277fbd05b09abbf9f9e66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733142675709071249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-404682,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 703fc2cb89a57467474e4f73926664ed,},Annotation
s:map[string]string{io.kubernetes.container.hash: 41943a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6e17ed1-4d01-4132-ad10-91ce0dd3bfbc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e5364583f013       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   882677737d3e2       coredns-6d4b75cb6d-99ff8
	1587657a81011       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   c947954c442f4       kube-proxy-d2k9g
	7b39ef8112e71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   a6145ffa8fc12       storage-provisioner
	7cdfedcd4747e       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   2b00c678fcfd3       kube-controller-manager-test-preload-404682
	a708ebc492fd8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   8bee0e3b8bdcb       etcd-test-preload-404682
	8a5536104bb57       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   711507ff939cf       kube-scheduler-test-preload-404682
	e68bf8a14517c       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   f47975e694411       kube-apiserver-test-preload-404682
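
Editor's note: the table above is the runtime's view of the restarted pods, and the repeated RuntimeService/ListContainers entries in the CRI-O debug log are the same call issued over the CRI socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation); the crictl client exposes it interactively. Below is a rough Go sketch of that call using the k8s.io/cri-api bindings. It is illustrative only, not part of the test suite, and assumes the standard gRPC unix-socket dialing shown.

    // cri_list.go: illustrative sketch of the ListContainers call seen in the CRI-O log.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// An empty filter returns the full container list, which is exactly what the
    	// "No filters were applied, returning full container list" log lines report.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
    	}
    }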
	
	
	==> coredns [6e5364583f01313c780eea2bfed594be1a07d636ce775e596c6b6d8a78d22944] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:56795 - 62759 "HINFO IN 4706222732116049108.5468197067050469890. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011580126s
	
	
	==> describe nodes <==
	Name:               test-preload-404682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-404682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=test-preload-404682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T12_30_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 12:30:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-404682
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 12:31:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 12:31:30 +0000   Mon, 02 Dec 2024 12:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 12:31:30 +0000   Mon, 02 Dec 2024 12:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 12:31:30 +0000   Mon, 02 Dec 2024 12:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 12:31:30 +0000   Mon, 02 Dec 2024 12:31:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    test-preload-404682
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a52cdada37341e793c8fbd4119b4eda
	  System UUID:                3a52cdad-a373-41e7-93c8-fbd4119b4eda
	  Boot ID:                    e2ed92c3-0d3f-428d-a160-99b7984ee55a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-99ff8                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-test-preload-404682                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-404682             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-404682    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-d2k9g                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-test-preload-404682             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node test-preload-404682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node test-preload-404682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s                kubelet          Node test-preload-404682 status is now: NodeHasSufficientPID
	  Normal  NodeReady                80s                kubelet          Node test-preload-404682 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-404682 event: Registered Node test-preload-404682 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-404682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-404682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-404682 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-404682 event: Registered Node test-preload-404682 in Controller
	
	
	==> dmesg <==
	[Dec 2 12:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000039] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052567] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039973] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.887887] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.796291] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.391160] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.956740] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.057126] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059461] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.183805] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.105560] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.259035] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[Dec 2 12:31] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[  +0.062633] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.877660] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +6.963924] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.622117] systemd-fstab-generator[1771]: Ignoring "noauto" option for root device
	[  +5.958078] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [a708ebc492fd87d5d5633e52317e79cd32dd7b3a8851e434036d1a5766db1adc] <==
	{"level":"info","ts":"2024-12-02T12:31:16.144Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8d50a8842d8d7ae5","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-02T12:31:16.144Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-02T12:31:16.156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 switched to configuration voters=(10182824043138087653)"}
	{"level":"info","ts":"2024-12-02T12:31:16.156Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","added-peer-id":"8d50a8842d8d7ae5","added-peer-peer-urls":["https://192.168.39.206:2380"]}
	{"level":"info","ts":"2024-12-02T12:31:16.157Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b0723a440b02124","local-member-id":"8d50a8842d8d7ae5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:31:16.157Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:31:16.162Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-12-02T12:31:16.162Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.206:2380"}
	{"level":"info","ts":"2024-12-02T12:31:16.162Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-02T12:31:16.163Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-02T12:31:16.163Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8d50a8842d8d7ae5","initial-advertise-peer-urls":["https://192.168.39.206:2380"],"listen-peer-urls":["https://192.168.39.206:2380"],"advertise-client-urls":["https://192.168.39.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgPreVoteResp from 8d50a8842d8d7ae5 at term 2"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became candidate at term 3"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 received MsgVoteResp from 8d50a8842d8d7ae5 at term 3"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d50a8842d8d7ae5 became leader at term 3"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d50a8842d8d7ae5 elected leader 8d50a8842d8d7ae5 at term 3"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8d50a8842d8d7ae5","local-member-attributes":"{Name:test-preload-404682 ClientURLs:[https://192.168.39.206:2379]}","request-path":"/0/members/8d50a8842d8d7ae5/attributes","cluster-id":"b0723a440b02124","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T12:31:18.004Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:31:18.006Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T12:31:18.006Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:31:18.008Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.206:2379"}
	{"level":"info","ts":"2024-12-02T12:31:18.015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T12:31:18.015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:31:37 up 0 min,  0 users,  load average: 1.19, 0.34, 0.12
	Linux test-preload-404682 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e68bf8a14517ce2049d5c28fe5b9c5f96cde72a72f626fd2d98c25db182dbf35] <==
	I1202 12:31:20.402688       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1202 12:31:20.362787       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1202 12:31:20.403126       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I1202 12:31:20.404215       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 12:31:20.412211       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1202 12:31:20.412244       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1202 12:31:20.511311       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1202 12:31:20.551912       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1202 12:31:20.565517       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 12:31:20.572680       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1202 12:31:20.573195       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1202 12:31:20.574357       1 cache.go:39] Caches are synced for autoregister controller
	I1202 12:31:20.598026       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 12:31:20.603497       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1202 12:31:20.612761       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1202 12:31:21.063748       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1202 12:31:21.377456       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 12:31:22.177513       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1202 12:31:22.191691       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1202 12:31:22.219579       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1202 12:31:22.259056       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 12:31:22.270099       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 12:31:22.940665       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1202 12:31:32.903003       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1202 12:31:32.939289       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7cdfedcd4747e3603e741b8e3e76670b6eaa09c15f03afe311757ec85278154b] <==
	I1202 12:31:32.888863       1 shared_informer.go:262] Caches are synced for persistent volume
	I1202 12:31:32.889550       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1202 12:31:32.892145       1 shared_informer.go:262] Caches are synced for HPA
	I1202 12:31:32.893732       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1202 12:31:32.894809       1 shared_informer.go:262] Caches are synced for disruption
	I1202 12:31:32.894849       1 disruption.go:371] Sending events to api server.
	I1202 12:31:32.898502       1 shared_informer.go:262] Caches are synced for attach detach
	I1202 12:31:32.902674       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1202 12:31:32.905616       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1202 12:31:32.908948       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1202 12:31:32.913332       1 shared_informer.go:262] Caches are synced for expand
	I1202 12:31:32.926034       1 shared_informer.go:262] Caches are synced for namespace
	I1202 12:31:32.927211       1 shared_informer.go:262] Caches are synced for daemon sets
	I1202 12:31:32.929550       1 shared_informer.go:262] Caches are synced for endpoint
	I1202 12:31:32.964440       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1202 12:31:32.964480       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1202 12:31:32.964513       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1202 12:31:32.964582       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1202 12:31:32.982439       1 shared_informer.go:262] Caches are synced for stateful set
	I1202 12:31:33.041519       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1202 12:31:33.072045       1 shared_informer.go:262] Caches are synced for resource quota
	I1202 12:31:33.117078       1 shared_informer.go:262] Caches are synced for resource quota
	I1202 12:31:33.522116       1 shared_informer.go:262] Caches are synced for garbage collector
	I1202 12:31:33.522223       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1202 12:31:33.545305       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [1587657a8101116e1acb57ab9b642dc89b02b9f29f181c24dae1ed3de93fa34b] <==
	I1202 12:31:22.895726       1 node.go:163] Successfully retrieved node IP: 192.168.39.206
	I1202 12:31:22.895840       1 server_others.go:138] "Detected node IP" address="192.168.39.206"
	I1202 12:31:22.895888       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1202 12:31:22.929827       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1202 12:31:22.929841       1 server_others.go:206] "Using iptables Proxier"
	I1202 12:31:22.930020       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1202 12:31:22.930692       1 server.go:661] "Version info" version="v1.24.4"
	I1202 12:31:22.930705       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:31:22.933542       1 config.go:317] "Starting service config controller"
	I1202 12:31:22.933731       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1202 12:31:22.933791       1 config.go:226] "Starting endpoint slice config controller"
	I1202 12:31:22.933810       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1202 12:31:22.936215       1 config.go:444] "Starting node config controller"
	I1202 12:31:22.936246       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1202 12:31:23.034557       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1202 12:31:23.034586       1 shared_informer.go:262] Caches are synced for service config
	I1202 12:31:23.036493       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [8a5536104bb57b979b6269bcd5b9f4641bf57ded18e63c80655ef401f1856c9a] <==
	W1202 12:31:20.521621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 12:31:20.521706       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1202 12:31:20.530638       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 12:31:20.530675       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1202 12:31:20.530737       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 12:31:20.530765       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1202 12:31:20.530815       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 12:31:20.530841       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1202 12:31:20.530889       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 12:31:20.530917       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1202 12:31:20.530997       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1202 12:31:20.531023       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1202 12:31:20.531126       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 12:31:20.531153       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1202 12:31:20.531214       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1202 12:31:20.531222       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1202 12:31:20.531243       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 12:31:20.531268       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1202 12:31:20.531300       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 12:31:20.531326       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1202 12:31:20.531350       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 12:31:20.531356       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1202 12:31:20.531487       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 12:31:20.531516       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1202 12:31:21.604365       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161663    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b06e0b45-03ae-4ee1-9042-13504452ad66-kube-proxy\") pod \"kube-proxy-d2k9g\" (UID: \"b06e0b45-03ae-4ee1-9042-13504452ad66\") " pod="kube-system/kube-proxy-d2k9g"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161711    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b06e0b45-03ae-4ee1-9042-13504452ad66-lib-modules\") pod \"kube-proxy-d2k9g\" (UID: \"b06e0b45-03ae-4ee1-9042-13504452ad66\") " pod="kube-system/kube-proxy-d2k9g"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161760    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume\") pod \"coredns-6d4b75cb6d-99ff8\" (UID: \"fd9bc41f-9a2f-43d9-9296-e5a1d03ff222\") " pod="kube-system/coredns-6d4b75cb6d-99ff8"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161812    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktjd\" (UniqueName: \"kubernetes.io/projected/98eefa9f-2810-4028-b81b-536da9ce44d0-kube-api-access-rktjd\") pod \"storage-provisioner\" (UID: \"98eefa9f-2810-4028-b81b-536da9ce44d0\") " pod="kube-system/storage-provisioner"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161858    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq86\" (UniqueName: \"kubernetes.io/projected/b06e0b45-03ae-4ee1-9042-13504452ad66-kube-api-access-7jq86\") pod \"kube-proxy-d2k9g\" (UID: \"b06e0b45-03ae-4ee1-9042-13504452ad66\") " pod="kube-system/kube-proxy-d2k9g"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161918    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gkf\" (UniqueName: \"kubernetes.io/projected/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-kube-api-access-r2gkf\") pod \"coredns-6d4b75cb6d-99ff8\" (UID: \"fd9bc41f-9a2f-43d9-9296-e5a1d03ff222\") " pod="kube-system/coredns-6d4b75cb6d-99ff8"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.161968    1140 reconciler.go:159] "Reconciler: start to sync state"
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.518090    1140 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a900eb5-78dc-493b-a67f-26c6bc488076-config-volume\") pod \"8a900eb5-78dc-493b-a67f-26c6bc488076\" (UID: \"8a900eb5-78dc-493b-a67f-26c6bc488076\") "
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.518294    1140 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqmfc\" (UniqueName: \"kubernetes.io/projected/8a900eb5-78dc-493b-a67f-26c6bc488076-kube-api-access-xqmfc\") pod \"8a900eb5-78dc-493b-a67f-26c6bc488076\" (UID: \"8a900eb5-78dc-493b-a67f-26c6bc488076\") "
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: W1202 12:31:21.519215    1140 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/8a900eb5-78dc-493b-a67f-26c6bc488076/volumes/kubernetes.io~projected/kube-api-access-xqmfc: clearQuota called, but quotas disabled
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: E1202 12:31:21.519953    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: E1202 12:31:21.520124    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume podName:fd9bc41f-9a2f-43d9-9296-e5a1d03ff222 nodeName:}" failed. No retries permitted until 2024-12-02 12:31:22.020038198 +0000 UTC m=+7.158985024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume") pod "coredns-6d4b75cb6d-99ff8" (UID: "fd9bc41f-9a2f-43d9-9296-e5a1d03ff222") : object "kube-system"/"coredns" not registered
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.520507    1140 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a900eb5-78dc-493b-a67f-26c6bc488076-kube-api-access-xqmfc" (OuterVolumeSpecName: "kube-api-access-xqmfc") pod "8a900eb5-78dc-493b-a67f-26c6bc488076" (UID: "8a900eb5-78dc-493b-a67f-26c6bc488076"). InnerVolumeSpecName "kube-api-access-xqmfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: W1202 12:31:21.520569    1140 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/8a900eb5-78dc-493b-a67f-26c6bc488076/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.521295    1140 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a900eb5-78dc-493b-a67f-26c6bc488076-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a900eb5-78dc-493b-a67f-26c6bc488076" (UID: "8a900eb5-78dc-493b-a67f-26c6bc488076"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.618877    1140 reconciler.go:384] "Volume detached for volume \"kube-api-access-xqmfc\" (UniqueName: \"kubernetes.io/projected/8a900eb5-78dc-493b-a67f-26c6bc488076-kube-api-access-xqmfc\") on node \"test-preload-404682\" DevicePath \"\""
	Dec 02 12:31:21 test-preload-404682 kubelet[1140]: I1202 12:31:21.618901    1140 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a900eb5-78dc-493b-a67f-26c6bc488076-config-volume\") on node \"test-preload-404682\" DevicePath \"\""
	Dec 02 12:31:22 test-preload-404682 kubelet[1140]: E1202 12:31:22.021675    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 12:31:22 test-preload-404682 kubelet[1140]: E1202 12:31:22.021749    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume podName:fd9bc41f-9a2f-43d9-9296-e5a1d03ff222 nodeName:}" failed. No retries permitted until 2024-12-02 12:31:23.021734922 +0000 UTC m=+8.160681735 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume") pod "coredns-6d4b75cb6d-99ff8" (UID: "fd9bc41f-9a2f-43d9-9296-e5a1d03ff222") : object "kube-system"/"coredns" not registered
	Dec 02 12:31:23 test-preload-404682 kubelet[1140]: E1202 12:31:23.028049    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 12:31:23 test-preload-404682 kubelet[1140]: E1202 12:31:23.028130    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume podName:fd9bc41f-9a2f-43d9-9296-e5a1d03ff222 nodeName:}" failed. No retries permitted until 2024-12-02 12:31:25.028115393 +0000 UTC m=+10.167062206 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume") pod "coredns-6d4b75cb6d-99ff8" (UID: "fd9bc41f-9a2f-43d9-9296-e5a1d03ff222") : object "kube-system"/"coredns" not registered
	Dec 02 12:31:23 test-preload-404682 kubelet[1140]: E1202 12:31:23.091756    1140 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-99ff8" podUID=fd9bc41f-9a2f-43d9-9296-e5a1d03ff222
	Dec 02 12:31:23 test-preload-404682 kubelet[1140]: I1202 12:31:23.097028    1140 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8a900eb5-78dc-493b-a67f-26c6bc488076 path="/var/lib/kubelet/pods/8a900eb5-78dc-493b-a67f-26c6bc488076/volumes"
	Dec 02 12:31:25 test-preload-404682 kubelet[1140]: E1202 12:31:25.041991    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 12:31:25 test-preload-404682 kubelet[1140]: E1202 12:31:25.042170    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume podName:fd9bc41f-9a2f-43d9-9296-e5a1d03ff222 nodeName:}" failed. No retries permitted until 2024-12-02 12:31:29.042144878 +0000 UTC m=+14.181091710 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd9bc41f-9a2f-43d9-9296-e5a1d03ff222-config-volume") pod "coredns-6d4b75cb6d-99ff8" (UID: "fd9bc41f-9a2f-43d9-9296-e5a1d03ff222") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [7b39ef8112e71fc5ee0e157ffd904b366e7ba8ba27d9d4d0c574267f21da2ad5] <==
	I1202 12:31:22.432318       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-404682 -n test-preload-404682
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-404682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-404682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-404682
--- FAIL: TestPreload (164.37s)
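The kubelet errors in the post-mortem above ("object \"kube-system\"/\"coredns\" not registered", "No CNI configuration file in /etc/cni/net.d/") are typically transient after a kubelet restart, while the configmap informer cache catches up and the CNI config is rewritten. As a minimal sketch of follow-up checks that could be run against this profile before it is cleaned up (hypothetical commands, not part of the test harness; the profile name is taken from the log above):

    kubectl --context test-preload-404682 -n kube-system get configmap coredns
    kubectl --context test-preload-404682 -n kube-system get pods -o wide
    out/minikube-linux-amd64 ssh -p test-preload-404682 "ls /etc/cni/net.d/"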

                                                
                                    
TestKubernetesUpgrade (399.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1202 12:35:01.370253   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m14.952872871s)
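In the stdout capture below, the "Generating certificates and keys ..." / "Booting up control plane ..." pair appears twice, which is what minikube's progress output looks like when the first kubeadm bring-up attempt does not complete and the start is retried. A hypothetical follow-up to gather more detail on a failure like this (not run by the test itself; the profile name is taken from the command above):

    out/minikube-linux-amd64 logs -p kubernetes-upgrade-127536
    out/minikube-linux-amd64 ssh -p kubernetes-upgrade-127536 "sudo crictl ps -a"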

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-127536] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-127536" primary control-plane node in "kubernetes-upgrade-127536" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:34:55.677073   48354 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:34:55.677206   48354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:34:55.677217   48354 out.go:358] Setting ErrFile to fd 2...
	I1202 12:34:55.677221   48354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:34:55.677446   48354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:34:55.678024   48354 out.go:352] Setting JSON to false
	I1202 12:34:55.678974   48354 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4648,"bootTime":1733138248,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:34:55.679068   48354 start.go:139] virtualization: kvm guest
	I1202 12:34:55.681313   48354 out.go:177] * [kubernetes-upgrade-127536] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:34:55.682963   48354 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:34:55.682973   48354 notify.go:220] Checking for updates...
	I1202 12:34:55.685354   48354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:34:55.686469   48354 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:34:55.687624   48354 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:34:55.688732   48354 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:34:55.689857   48354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:34:55.691552   48354 config.go:182] Loaded profile config "NoKubernetes-405664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:34:55.691706   48354 config.go:182] Loaded profile config "pause-198058": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:34:55.691816   48354 config.go:182] Loaded profile config "running-upgrade-449763": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1202 12:34:55.691929   48354 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:34:55.725990   48354 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 12:34:55.727102   48354 start.go:297] selected driver: kvm2
	I1202 12:34:55.727121   48354 start.go:901] validating driver "kvm2" against <nil>
	I1202 12:34:55.727135   48354 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:34:55.728159   48354 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:34:55.728270   48354 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:34:55.742915   48354 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:34:55.742965   48354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 12:34:55.743264   48354 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 12:34:55.743309   48354 cni.go:84] Creating CNI manager for ""
	I1202 12:34:55.743373   48354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:34:55.743386   48354 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 12:34:55.743456   48354 start.go:340] cluster config:
	{Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:34:55.743604   48354 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:34:55.745853   48354 out.go:177] * Starting "kubernetes-upgrade-127536" primary control-plane node in "kubernetes-upgrade-127536" cluster
	I1202 12:34:55.746848   48354 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 12:34:55.746912   48354 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 12:34:55.746927   48354 cache.go:56] Caching tarball of preloaded images
	I1202 12:34:55.747021   48354 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:34:55.747037   48354 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1202 12:34:55.747145   48354 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/config.json ...
	I1202 12:34:55.747169   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/config.json: {Name:mke539d7421d594f4cd094487b035ec5973d9aaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:34:55.747343   48354 start.go:360] acquireMachinesLock for kubernetes-upgrade-127536: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:35:40.773321   48354 start.go:364] duration metric: took 45.025918574s to acquireMachinesLock for "kubernetes-upgrade-127536"
	I1202 12:35:40.773392   48354 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:35:40.773524   48354 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 12:35:40.775420   48354 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 12:35:40.775661   48354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:35:40.775736   48354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:35:40.792624   48354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I1202 12:35:40.793071   48354 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:35:40.793682   48354 main.go:141] libmachine: Using API Version  1
	I1202 12:35:40.793708   48354 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:35:40.794320   48354 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:35:40.794650   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:35:40.794825   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:35:40.794994   48354 start.go:159] libmachine.API.Create for "kubernetes-upgrade-127536" (driver="kvm2")
	I1202 12:35:40.795057   48354 client.go:168] LocalClient.Create starting
	I1202 12:35:40.795092   48354 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 12:35:40.795127   48354 main.go:141] libmachine: Decoding PEM data...
	I1202 12:35:40.795154   48354 main.go:141] libmachine: Parsing certificate...
	I1202 12:35:40.795225   48354 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 12:35:40.795246   48354 main.go:141] libmachine: Decoding PEM data...
	I1202 12:35:40.795256   48354 main.go:141] libmachine: Parsing certificate...
	I1202 12:35:40.795270   48354 main.go:141] libmachine: Running pre-create checks...
	I1202 12:35:40.795280   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .PreCreateCheck
	I1202 12:35:40.795721   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetConfigRaw
	I1202 12:35:40.796123   48354 main.go:141] libmachine: Creating machine...
	I1202 12:35:40.796139   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .Create
	I1202 12:35:40.796285   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Creating KVM machine...
	I1202 12:35:40.797448   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found existing default KVM network
	I1202 12:35:40.798804   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:40.798658   48885 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:18:d0:9f} reservation:<nil>}
	I1202 12:35:40.799859   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:40.799760   48885 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:4b:7a:5a} reservation:<nil>}
	I1202 12:35:40.800659   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:40.800582   48885 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:76:49} reservation:<nil>}
	I1202 12:35:40.801754   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:40.801658   48885 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000422f30}
	I1202 12:35:40.801787   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | created network xml: 
	I1202 12:35:40.801800   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | <network>
	I1202 12:35:40.801811   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |   <name>mk-kubernetes-upgrade-127536</name>
	I1202 12:35:40.801822   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |   <dns enable='no'/>
	I1202 12:35:40.801836   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |   
	I1202 12:35:40.801848   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1202 12:35:40.801866   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |     <dhcp>
	I1202 12:35:40.801877   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1202 12:35:40.801886   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |     </dhcp>
	I1202 12:35:40.801896   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |   </ip>
	I1202 12:35:40.801904   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG |   
	I1202 12:35:40.801918   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | </network>
	I1202 12:35:40.801926   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | 
	I1202 12:35:40.807829   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | trying to create private KVM network mk-kubernetes-upgrade-127536 192.168.72.0/24...
	I1202 12:35:40.883807   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | private KVM network mk-kubernetes-upgrade-127536 192.168.72.0/24 created
	I1202 12:35:40.883869   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536 ...
	I1202 12:35:40.883893   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:40.883754   48885 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:35:40.883920   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 12:35:40.883951   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 12:35:41.127696   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:41.127582   48885 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa...
	I1202 12:35:41.295601   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:41.295454   48885 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/kubernetes-upgrade-127536.rawdisk...
	I1202 12:35:41.295633   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Writing magic tar header
	I1202 12:35:41.295650   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Writing SSH key tar header
	I1202 12:35:41.295826   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:41.295733   48885 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536 ...
	I1202 12:35:41.295865   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536
	I1202 12:35:41.295891   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 12:35:41.295906   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536 (perms=drwx------)
	I1202 12:35:41.295926   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 12:35:41.295939   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 12:35:41.295965   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 12:35:41.295979   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 12:35:41.295990   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:35:41.296005   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 12:35:41.296016   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 12:35:41.296030   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home/jenkins
	I1202 12:35:41.296042   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Checking permissions on dir: /home
	I1202 12:35:41.296122   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 12:35:41.296156   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Creating domain...
	I1202 12:35:41.296165   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Skipping /home - not owner
	I1202 12:35:41.297416   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) define libvirt domain using xml: 
	I1202 12:35:41.297433   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) <domain type='kvm'>
	I1202 12:35:41.297443   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <name>kubernetes-upgrade-127536</name>
	I1202 12:35:41.297451   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <memory unit='MiB'>2200</memory>
	I1202 12:35:41.297462   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <vcpu>2</vcpu>
	I1202 12:35:41.297469   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <features>
	I1202 12:35:41.297490   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <acpi/>
	I1202 12:35:41.297496   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <apic/>
	I1202 12:35:41.297504   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <pae/>
	I1202 12:35:41.297516   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     
	I1202 12:35:41.297554   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   </features>
	I1202 12:35:41.297569   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <cpu mode='host-passthrough'>
	I1202 12:35:41.297578   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   
	I1202 12:35:41.297586   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   </cpu>
	I1202 12:35:41.297594   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <os>
	I1202 12:35:41.297601   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <type>hvm</type>
	I1202 12:35:41.297609   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <boot dev='cdrom'/>
	I1202 12:35:41.297617   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <boot dev='hd'/>
	I1202 12:35:41.297638   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <bootmenu enable='no'/>
	I1202 12:35:41.297646   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   </os>
	I1202 12:35:41.297657   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   <devices>
	I1202 12:35:41.297667   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <disk type='file' device='cdrom'>
	I1202 12:35:41.297689   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/boot2docker.iso'/>
	I1202 12:35:41.297700   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <target dev='hdc' bus='scsi'/>
	I1202 12:35:41.297711   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <readonly/>
	I1202 12:35:41.297733   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </disk>
	I1202 12:35:41.297745   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <disk type='file' device='disk'>
	I1202 12:35:41.297756   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 12:35:41.297800   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/kubernetes-upgrade-127536.rawdisk'/>
	I1202 12:35:41.297822   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <target dev='hda' bus='virtio'/>
	I1202 12:35:41.297833   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </disk>
	I1202 12:35:41.297842   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <interface type='network'>
	I1202 12:35:41.297854   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <source network='mk-kubernetes-upgrade-127536'/>
	I1202 12:35:41.297860   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <model type='virtio'/>
	I1202 12:35:41.297868   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </interface>
	I1202 12:35:41.297874   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <interface type='network'>
	I1202 12:35:41.297904   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <source network='default'/>
	I1202 12:35:41.297921   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <model type='virtio'/>
	I1202 12:35:41.297931   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </interface>
	I1202 12:35:41.297938   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <serial type='pty'>
	I1202 12:35:41.297948   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <target port='0'/>
	I1202 12:35:41.297954   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </serial>
	I1202 12:35:41.297962   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <console type='pty'>
	I1202 12:35:41.297970   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <target type='serial' port='0'/>
	I1202 12:35:41.297977   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </console>
	I1202 12:35:41.297983   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     <rng model='virtio'>
	I1202 12:35:41.297992   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)       <backend model='random'>/dev/random</backend>
	I1202 12:35:41.297998   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     </rng>
	I1202 12:35:41.298006   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     
	I1202 12:35:41.298011   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)     
	I1202 12:35:41.298019   48354 main.go:141] libmachine: (kubernetes-upgrade-127536)   </devices>
	I1202 12:35:41.298025   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) </domain>
	I1202 12:35:41.298034   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) 
	I1202 12:35:41.304960   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:8f:87:56 in network default
	I1202 12:35:41.305619   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Ensuring networks are active...
	I1202 12:35:41.305646   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:41.306419   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Ensuring network default is active
	I1202 12:35:41.306770   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Ensuring network mk-kubernetes-upgrade-127536 is active
	I1202 12:35:41.307391   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Getting domain xml...
	I1202 12:35:41.308179   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Creating domain...
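(Illustrative aside, not part of the log.) The <domain type='kvm'> XML printed above is what the kvm2 driver hands to libvirt before booting the VM: name, 2200 MiB of memory, 2 vCPUs, the boot2docker ISO and raw disk, and two virtio network interfaces. A minimal sketch of defining and starting a domain from such an XML document with the libvirt Go bindings could look like the following; the connection URI and the domain.xml file name are assumptions, and this is not the driver's actual code.

// Illustrative sketch only: define and start a KVM domain from an XML
// document, mirroring what the log above shows the kvm2 driver doing.
// Error handling is shortened for brevity.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// qemu:///system matches the KVMQemuURI shown later in this log.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold the <domain type='kvm'>...</domain> document
	// printed above (name, memory, vcpu, disks, two network interfaces).
	xmlConfig, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	// DomainDefineXML makes the domain persistent; Create actually boots it.
	dom, err := conn.DomainDefineXML(string(xmlConfig))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}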
	I1202 12:35:42.712494   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Waiting to get IP...
	I1202 12:35:42.713406   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:42.714028   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:42.714085   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:42.714006   48885 retry.go:31] will retry after 264.429834ms: waiting for machine to come up
	I1202 12:35:42.980642   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:42.981239   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:42.981262   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:42.981156   48885 retry.go:31] will retry after 346.415423ms: waiting for machine to come up
	I1202 12:35:43.329812   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:43.330339   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:43.330350   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:43.330287   48885 retry.go:31] will retry after 375.433998ms: waiting for machine to come up
	I1202 12:35:43.708044   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:43.708654   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:43.708687   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:43.708565   48885 retry.go:31] will retry after 576.420264ms: waiting for machine to come up
	I1202 12:35:44.287174   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:44.287688   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:44.287716   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:44.287645   48885 retry.go:31] will retry after 736.66184ms: waiting for machine to come up
	I1202 12:35:45.025445   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:45.026019   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:45.026076   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:45.025915   48885 retry.go:31] will retry after 887.62348ms: waiting for machine to come up
	I1202 12:35:45.915221   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:45.915752   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:45.915786   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:45.915695   48885 retry.go:31] will retry after 1.024831602s: waiting for machine to come up
	I1202 12:35:46.941768   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:46.942344   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:46.942373   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:46.942271   48885 retry.go:31] will retry after 1.263618131s: waiting for machine to come up
	I1202 12:35:48.207276   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:48.207830   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:48.207859   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:48.207777   48885 retry.go:31] will retry after 1.286017508s: waiting for machine to come up
	I1202 12:35:49.494989   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:49.495429   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:49.495460   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:49.495378   48885 retry.go:31] will retry after 1.954346776s: waiting for machine to come up
	I1202 12:35:51.451431   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:51.451939   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:51.451962   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:51.451906   48885 retry.go:31] will retry after 1.812559555s: waiting for machine to come up
	I1202 12:35:53.265690   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:53.266265   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:53.266291   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:53.266213   48885 retry.go:31] will retry after 2.953590867s: waiting for machine to come up
	I1202 12:35:56.221062   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:56.221427   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:56.221455   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:56.221372   48885 retry.go:31] will retry after 2.910509534s: waiting for machine to come up
	I1202 12:35:59.134060   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:35:59.134566   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find current IP address of domain kubernetes-upgrade-127536 in network mk-kubernetes-upgrade-127536
	I1202 12:35:59.134589   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | I1202 12:35:59.134514   48885 retry.go:31] will retry after 4.758429198s: waiting for machine to come up
	I1202 12:36:03.896696   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:03.897201   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has current primary IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:03.897223   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Found IP for machine: 192.168.72.153
	I1202 12:36:03.897236   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Reserving static IP address...
	I1202 12:36:03.897521   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-127536", mac: "52:54:00:b3:a8:26", ip: "192.168.72.153"} in network mk-kubernetes-upgrade-127536
	I1202 12:36:03.967898   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Reserved static IP address: 192.168.72.153
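(Illustrative aside, not part of the log.) The repeated "will retry after ..." lines above come from a wait loop that polls the network's DHCP leases for the new MAC address, sleeping a growing, jittered interval between attempts until a lease appears or a deadline passes. A generic sketch of that retry-with-backoff pattern follows; waitForIP and the durations are placeholders, not minikube's retry.go implementation.

// Illustrative sketch of the "will retry after ..." pattern in the log:
// poll for a condition, sleeping a growing, slightly jittered interval
// between attempts, until it succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP is a placeholder for "look up the DHCP lease for this MAC";
// it is not minikube's implementation.
func waitForIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	backoff := 250 * time.Millisecond

	for time.Now().Before(deadline) {
		ip, err := waitForIP("52:54:00:b3:a8:26")
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Add jitter and grow the interval, similar to the retry lines
		// above (264ms, 346ms, 375ms, ... up to several seconds).
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for an IP address")
}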
	I1202 12:36:03.967929   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Getting to WaitForSSH function...
	I1202 12:36:03.967939   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Waiting for SSH to be available...
	I1202 12:36:03.970712   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:03.971310   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:03.971340   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:03.971358   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Using SSH client type: external
	I1202 12:36:03.971374   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa (-rw-------)
	I1202 12:36:03.971407   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:36:03.971421   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | About to run SSH command:
	I1202 12:36:03.971438   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | exit 0
	I1202 12:36:04.109718   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | SSH cmd err, output: <nil>: 
	I1202 12:36:04.110046   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) KVM machine creation complete!
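(Illustrative aside, not part of the log.) The WaitForSSH step above treats the machine as reachable once the command `exit 0` succeeds over SSH with the machine's private key (the same check is repeated a few lines below with the native client). A minimal stand-alone version of that check using golang.org/x/crypto/ssh; the key path is left as a placeholder and this is not minikube's sshutil code.

// Illustrative sketch: consider a machine "SSH-ready" once `exit 0`
// succeeds over SSH, which is what the WaitForSSH step above does.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) bool {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return false
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return false
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return false
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return false
	}
	defer session.Close()
	return session.Run("exit 0") == nil
}

func main() {
	// Address and user are taken from the log; the key path is a placeholder.
	for !sshReady("192.168.72.153:22", "docker", "/path/to/id_rsa") {
		log.Println("SSH not ready yet, retrying...")
		time.Sleep(2 * time.Second)
	}
	log.Println("SSH is available")
}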
	I1202 12:36:04.110429   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetConfigRaw
	I1202 12:36:04.111049   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:04.111265   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:04.111455   48354 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 12:36:04.111474   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetState
	I1202 12:36:04.112771   48354 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 12:36:04.112788   48354 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 12:36:04.112795   48354 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 12:36:04.112803   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.115024   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.115441   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.115471   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.115625   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:04.115803   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.115946   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.116064   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:04.116223   48354 main.go:141] libmachine: Using SSH client type: native
	I1202 12:36:04.116484   48354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:36:04.116502   48354 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 12:36:04.232029   48354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:36:04.232055   48354 main.go:141] libmachine: Detecting the provisioner...
	I1202 12:36:04.232065   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.234923   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.235224   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.235250   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.235521   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:04.235699   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.235859   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.236028   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:04.236269   48354 main.go:141] libmachine: Using SSH client type: native
	I1202 12:36:04.236488   48354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:36:04.236505   48354 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 12:36:04.357123   48354 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 12:36:04.357207   48354 main.go:141] libmachine: found compatible host: buildroot
	I1202 12:36:04.357219   48354 main.go:141] libmachine: Provisioning with buildroot...
	I1202 12:36:04.357230   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:36:04.357506   48354 buildroot.go:166] provisioning hostname "kubernetes-upgrade-127536"
	I1202 12:36:04.357554   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:36:04.357758   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.361027   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.361434   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.361460   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.361656   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:04.361828   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.361993   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.362178   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:04.362361   48354 main.go:141] libmachine: Using SSH client type: native
	I1202 12:36:04.362571   48354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:36:04.362590   48354 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-127536 && echo "kubernetes-upgrade-127536" | sudo tee /etc/hostname
	I1202 12:36:04.495767   48354 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-127536
	
	I1202 12:36:04.495793   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.499046   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.499460   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.499488   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.499657   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:04.499854   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.500019   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.500172   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:04.500370   48354 main.go:141] libmachine: Using SSH client type: native
	I1202 12:36:04.500572   48354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:36:04.500596   48354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-127536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-127536/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-127536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:36:04.629664   48354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
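(Illustrative aside, not part of the log.) The shell snippet above makes the /etc/hosts update idempotent: do nothing if a line already ends with the hostname, otherwise rewrite the 127.0.1.1 entry or append one. The same logic expressed directly in Go, as a sketch only:

// Illustrative sketch of the idempotent /etc/hosts update performed by
// the shell snippet above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == hostname {
			return nil // a line already ends with this hostname, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "kubernetes-upgrade-127536"); err != nil {
		fmt.Println("update failed:", err)
	}
}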
	I1202 12:36:04.629692   48354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:36:04.629723   48354 buildroot.go:174] setting up certificates
	I1202 12:36:04.629732   48354 provision.go:84] configureAuth start
	I1202 12:36:04.629748   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:36:04.629991   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:36:04.633078   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.633518   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.633547   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.633640   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.636010   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.636387   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.636411   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.636567   48354 provision.go:143] copyHostCerts
	I1202 12:36:04.636631   48354 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:36:04.636646   48354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:36:04.636710   48354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:36:04.636833   48354 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:36:04.636845   48354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:36:04.636872   48354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:36:04.636958   48354 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:36:04.636968   48354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:36:04.636994   48354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:36:04.637092   48354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-127536 san=[127.0.0.1 192.168.72.153 kubernetes-upgrade-127536 localhost minikube]
	I1202 12:36:04.753512   48354 provision.go:177] copyRemoteCerts
	I1202 12:36:04.753591   48354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:36:04.753620   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.756327   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.756661   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.756690   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.756839   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:04.756982   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.757105   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:04.757225   48354 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:36:04.848031   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:36:04.874249   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1202 12:36:04.899717   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:36:04.924924   48354 provision.go:87] duration metric: took 295.178827ms to configureAuth
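(Illustrative aside, not part of the log.) The configureAuth step that just finished regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, the machine name, localhost and minikube, then copies it to /etc/docker on the guest. A compressed sketch of producing a certificate with those SANs via crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair.

// Illustrative sketch: generate a server certificate whose SANs match the
// san=[...] list in the configureAuth log line above. Self-signed for
// brevity; minikube signs it with its CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-127536"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IP entries and DNS names go in separate lists.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.153")},
		DNSNames:    []string{"kubernetes-upgrade-127536", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}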
	I1202 12:36:04.924956   48354 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:36:04.925114   48354 config.go:182] Loaded profile config "kubernetes-upgrade-127536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1202 12:36:04.925186   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:04.927914   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.928351   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:04.928382   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:04.928644   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:04.928819   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.928998   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:04.929132   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:04.929301   48354 main.go:141] libmachine: Using SSH client type: native
	I1202 12:36:04.929497   48354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:36:04.929523   48354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:36:05.176389   48354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:36:05.176420   48354 main.go:141] libmachine: Checking connection to Docker...
	I1202 12:36:05.176433   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetURL
	I1202 12:36:05.177747   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | Using libvirt version 6000000
	I1202 12:36:05.180138   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.180595   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.180619   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.180832   48354 main.go:141] libmachine: Docker is up and running!
	I1202 12:36:05.180845   48354 main.go:141] libmachine: Reticulating splines...
	I1202 12:36:05.180852   48354 client.go:171] duration metric: took 24.385784328s to LocalClient.Create
	I1202 12:36:05.180880   48354 start.go:167] duration metric: took 24.385885871s to libmachine.API.Create "kubernetes-upgrade-127536"
	I1202 12:36:05.180895   48354 start.go:293] postStartSetup for "kubernetes-upgrade-127536" (driver="kvm2")
	I1202 12:36:05.180908   48354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:36:05.180932   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:05.181209   48354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:36:05.181274   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:05.184053   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.184389   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.184426   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.184578   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:05.184745   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:05.184911   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:05.185082   48354 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:36:05.274966   48354 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:36:05.280495   48354 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:36:05.280517   48354 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:36:05.280584   48354 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:36:05.280679   48354 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:36:05.280800   48354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:36:05.294331   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:36:05.319945   48354 start.go:296] duration metric: took 139.034756ms for postStartSetup
	I1202 12:36:05.319991   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetConfigRaw
	I1202 12:36:05.320635   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:36:05.323557   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.323952   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.323975   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.324259   48354 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/config.json ...
	I1202 12:36:05.324540   48354 start.go:128] duration metric: took 24.551001652s to createHost
	I1202 12:36:05.324573   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:05.326733   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.327038   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.327069   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.327271   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:05.327453   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:05.327611   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:05.327788   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:05.327963   48354 main.go:141] libmachine: Using SSH client type: native
	I1202 12:36:05.328159   48354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:36:05.328170   48354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:36:05.445201   48354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733142965.395447377
	
	I1202 12:36:05.445230   48354 fix.go:216] guest clock: 1733142965.395447377
	I1202 12:36:05.445241   48354 fix.go:229] Guest: 2024-12-02 12:36:05.395447377 +0000 UTC Remote: 2024-12-02 12:36:05.324556749 +0000 UTC m=+69.690489046 (delta=70.890628ms)
	I1202 12:36:05.445266   48354 fix.go:200] guest clock delta is within tolerance: 70.890628ms
	I1202 12:36:05.445273   48354 start.go:83] releasing machines lock for "kubernetes-upgrade-127536", held for 24.671914844s
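(Illustrative aside, not part of the log.) The fix.go lines above parse the guest's `date +%s.%N` output and accept the machine when the skew against the host clock is small (a 70.89ms delta here). A small sketch of that comparison; the tolerance constant is an assumed value, not minikube's.

// Illustrative sketch: parse the guest's `date +%s.%N` output and check
// the clock skew against the host, as the fix.go lines above report.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1733142965.395447377" // what `date +%s.%N` printed over SSH
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)

	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	// The tolerance here is an assumed value for illustration, not minikube's.
	const tolerance = 1 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}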
	I1202 12:36:05.445303   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:05.445564   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:36:05.448301   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.448699   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.448727   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.448862   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:05.449357   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:05.449502   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:36:05.449598   48354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:36:05.449639   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:05.449710   48354 ssh_runner.go:195] Run: cat /version.json
	I1202 12:36:05.449748   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:36:05.452421   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.452610   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.452733   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.452757   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.452909   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:05.453038   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:05.453065   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:05.453072   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:05.453234   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:05.453307   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:36:05.453374   48354 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:36:05.453470   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:36:05.453580   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:36:05.453717   48354 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:36:05.571501   48354 ssh_runner.go:195] Run: systemctl --version
	I1202 12:36:05.577556   48354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:36:05.740932   48354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:36:05.748654   48354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:36:05.748711   48354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:36:05.768729   48354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:36:05.768755   48354 start.go:495] detecting cgroup driver to use...
	I1202 12:36:05.768800   48354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:36:05.788725   48354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:36:05.804301   48354 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:36:05.804353   48354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:36:05.819628   48354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:36:05.834795   48354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:36:05.963906   48354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:36:06.122279   48354 docker.go:233] disabling docker service ...
	I1202 12:36:06.122342   48354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:36:06.137779   48354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:36:06.151505   48354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:36:06.299851   48354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:36:06.424001   48354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:36:06.442974   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:36:06.461883   48354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1202 12:36:06.461954   48354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:36:06.472296   48354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:36:06.472359   48354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:36:06.482468   48354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:36:06.492145   48354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:36:06.502056   48354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:36:06.512680   48354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:36:06.521449   48354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:36:06.521496   48354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:36:06.535353   48354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:36:06.544932   48354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:36:06.664225   48354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:36:06.749510   48354 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:36:06.749589   48354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:36:06.754882   48354 start.go:563] Will wait 60s for crictl version
	I1202 12:36:06.754932   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:06.758635   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:36:06.800305   48354 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:36:06.800363   48354 ssh_runner.go:195] Run: crio --version
	I1202 12:36:06.828031   48354 ssh_runner.go:195] Run: crio --version
	I1202 12:36:06.857493   48354 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1202 12:36:06.858800   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:36:06.861623   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:06.862002   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:35:57 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:36:06.862025   48354 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:36:06.862206   48354 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1202 12:36:06.866129   48354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:36:06.878907   48354 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:36:06.879005   48354 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 12:36:06.879044   48354 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:36:06.912251   48354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1202 12:36:06.912320   48354 ssh_runner.go:195] Run: which lz4
	I1202 12:36:06.916331   48354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:36:06.920470   48354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:36:06.920494   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1202 12:36:08.549985   48354 crio.go:462] duration metric: took 1.633674586s to copy over tarball
	I1202 12:36:08.550047   48354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:36:11.151702   48354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.601630717s)
	I1202 12:36:11.151729   48354 crio.go:469] duration metric: took 2.601719853s to extract the tarball
	I1202 12:36:11.151737   48354 ssh_runner.go:146] rm: /preloaded.tar.lz4
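
The lines from 12:36:06.916 to 12:36:11.151 are the image-preload path: the runner stats /preloaded.tar.lz4 on the node (the status-1 stat above simply means it is absent), copies the cached tarball over SSH, unpacks it into /var with extended attributes preserved so file capabilities survive, then removes the tarball. On the node side this amounts to roughly:

    which lz4                                    # confirm the decompressor is available
    stat -c "%s %y" /preloaded.tar.lz4           # exit 1 here means the tarball still has to be copied over
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4    # unpack the preloaded image data under /var
    sudo rm -f /preloaded.tar.lz4                # reclaim the space once extracted
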
	I1202 12:36:11.194706   48354 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:36:11.244504   48354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1202 12:36:11.244527   48354 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 12:36:11.244637   48354 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.244662   48354 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:11.244602   48354 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:11.244696   48354 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1202 12:36:11.244708   48354 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:11.244594   48354 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:36:11.244610   48354 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:11.244739   48354 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1202 12:36:11.246355   48354 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:11.246407   48354 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:36:11.246424   48354 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:11.246434   48354 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:11.246354   48354 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1202 12:36:11.246363   48354 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1202 12:36:11.246354   48354 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:11.246781   48354 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.419016   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:11.423067   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:11.424336   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.433666   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:11.447304   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1202 12:36:11.460010   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1202 12:36:11.471969   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:11.560896   48354 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1202 12:36:11.560938   48354 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:11.560985   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.587066   48354 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1202 12:36:11.587117   48354 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:11.587168   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.617158   48354 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1202 12:36:11.617202   48354 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.617253   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.637571   48354 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1202 12:36:11.637618   48354 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1202 12:36:11.637654   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.637733   48354 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1202 12:36:11.637769   48354 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:11.637810   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.658184   48354 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1202 12:36:11.658265   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.658276   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:11.658289   48354 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:11.658318   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.658358   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:36:11.658227   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:11.658379   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:11.658452   48354 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1202 12:36:11.658479   48354 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1202 12:36:11.658507   48354 ssh_runner.go:195] Run: which crictl
	I1202 12:36:11.734893   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.811421   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:11.817920   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:11.818073   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:36:11.818155   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:11.818253   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:11.818323   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:36:11.818421   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:36:11.923623   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:36:12.002826   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1202 12:36:12.002940   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:36:12.003137   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:36:12.003218   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:36:12.003228   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:36:12.003259   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:12.035119   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1202 12:36:12.106833   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1202 12:36:12.130883   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1202 12:36:12.130890   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:36:12.130942   48354 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:36:12.130950   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1202 12:36:12.185199   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1202 12:36:12.185312   48354 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1202 12:36:12.247940   48354 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:36:12.398701   48354 cache_images.go:92] duration metric: took 1.15415184s to LoadCachedImages
	W1202 12:36:12.398824   48354 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
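
Because the preload did not yield the expected v1.20.0 images, the cache loader falls back to per-image handling: it asks podman for each image ID, removes mismatching tags with crictl, and then looks for per-image tarballs under .minikube/cache/images. The stat failure reported above means that cache directory has nothing for these images, which is why kubeadm later pulls them itself during preflight. The per-image check/removal seen in the log is essentially:

    # does the runtime already have the image? (prints the ID if so)
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.20.0
    # drop the stale tag so a cached copy could be loaded in its place
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
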
	I1202 12:36:12.398845   48354 kubeadm.go:934] updating node { 192.168.72.153 8443 v1.20.0 crio true true} ...
	I1202 12:36:12.398967   48354 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-127536 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
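
The blank ExecStart= line in the generated kubelet unit is deliberate: ExecStart is a list-type systemd setting, so a drop-in must first clear the value inherited from the base unit before supplying its own command line. After minikube writes this drop-in as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 433-byte scp at 12:36:12.475 below) and reloads systemd, the merged result can be inspected with:

    sudo systemctl daemon-reload          # pick up the new drop-in
    systemctl cat kubelet                 # show the base unit plus the 10-kubeadm.conf override
    systemctl show -p ExecStart kubelet   # confirm which ExecStart actually took effect
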
	I1202 12:36:12.399046   48354 ssh_runner.go:195] Run: crio config
	I1202 12:36:12.452213   48354 cni.go:84] Creating CNI manager for ""
	I1202 12:36:12.452245   48354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:36:12.452265   48354 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:36:12.452291   48354 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.153 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-127536 NodeName:kubernetes-upgrade-127536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1202 12:36:12.452462   48354 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-127536"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:36:12.452532   48354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1202 12:36:12.463489   48354 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:36:12.463549   48354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:36:12.475559   48354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1202 12:36:12.496052   48354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:36:12.513213   48354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1202 12:36:12.530081   48354 ssh_runner.go:195] Run: grep 192.168.72.153	control-plane.minikube.internal$ /etc/hosts
	I1202 12:36:12.534270   48354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:36:12.546475   48354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:36:12.665416   48354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:36:12.682504   48354 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536 for IP: 192.168.72.153
	I1202 12:36:12.682525   48354 certs.go:194] generating shared ca certs ...
	I1202 12:36:12.682545   48354 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:12.682714   48354 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:36:12.682755   48354 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:36:12.682764   48354 certs.go:256] generating profile certs ...
	I1202 12:36:12.682814   48354 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.key
	I1202 12:36:12.682836   48354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.crt with IP's: []
	I1202 12:36:13.263605   48354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.crt ...
	I1202 12:36:13.263630   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.crt: {Name:mkbcc0685f7a89dbf314bcc0a55a344446cbbbd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:13.306437   48354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.key ...
	I1202 12:36:13.306469   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.key: {Name:mkf8c2e446004452c1191cc0235f1e6e5e0745d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:13.306636   48354 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key.2834edd1
	I1202 12:36:13.306659   48354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt.2834edd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.153]
	I1202 12:36:13.477550   48354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt.2834edd1 ...
	I1202 12:36:13.477585   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt.2834edd1: {Name:mk3029214262f5c05676cc4a6f0a38b135a76625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:13.477753   48354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key.2834edd1 ...
	I1202 12:36:13.477771   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key.2834edd1: {Name:mk1a482e7b3a98cc8cad0f4b5ffb508fa0ee85e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:13.477870   48354 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt.2834edd1 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt
	I1202 12:36:13.477963   48354 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key.2834edd1 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key
	I1202 12:36:13.478039   48354 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.key
	I1202 12:36:13.478063   48354 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.crt with IP's: []
	I1202 12:36:13.745591   48354 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.crt ...
	I1202 12:36:13.745618   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.crt: {Name:mkaacac181659a373b0e553116ae40ca1ab673ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:13.745787   48354 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.key ...
	I1202 12:36:13.745802   48354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.key: {Name:mk9143bab11bef46f42e79280aad54cbdf305824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:36:13.745992   48354 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:36:13.746029   48354 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:36:13.746039   48354 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:36:13.746059   48354 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:36:13.746080   48354 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:36:13.746101   48354 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:36:13.746142   48354 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:36:13.746733   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:36:13.781160   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:36:13.806927   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:36:13.834984   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:36:13.860537   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 12:36:13.886164   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:36:13.914232   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:36:13.939980   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:36:13.993791   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:36:14.019952   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:36:14.045275   48354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:36:14.075242   48354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:36:14.094124   48354 ssh_runner.go:195] Run: openssl version
	I1202 12:36:14.099958   48354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:36:14.112589   48354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:36:14.117918   48354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:36:14.117968   48354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:36:14.125571   48354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:36:14.138614   48354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:36:14.150474   48354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:36:14.155270   48354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:36:14.155316   48354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:36:14.162816   48354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:36:14.176783   48354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:36:14.189775   48354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:36:14.194860   48354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:36:14.194902   48354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:36:14.201253   48354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
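
The openssl/ln pairs above install each CA into the system trust store using OpenSSL's subject-hash naming: the certificate is copied under /usr/share/ca-certificates and a symlink named <subject-hash>.0 is created in /etc/ssl/certs so OpenSSL-based clients can find it by hash lookup. For the minikube CA seen in this run (hash b5213941), the same steps are roughly the following, with an optional chain check at the end (assuming the symlink is in place):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt       # optional sanity check
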
	I1202 12:36:14.213717   48354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:36:14.218420   48354 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 12:36:14.218480   48354 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:36:14.218552   48354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:36:14.218608   48354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:36:14.267120   48354 cri.go:89] found id: ""
	I1202 12:36:14.267190   48354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:36:14.277866   48354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:36:14.288057   48354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:36:14.297919   48354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:36:14.297935   48354 kubeadm.go:157] found existing configuration files:
	
	I1202 12:36:14.297966   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:36:14.309305   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:36:14.309360   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:36:14.321000   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:36:14.334062   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:36:14.334118   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:36:14.345425   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:36:14.359266   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:36:14.359320   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:36:14.372566   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:36:14.390808   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:36:14.390881   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:36:14.411979   48354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:36:14.578086   48354 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:36:14.578174   48354 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:36:14.749445   48354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:36:14.749589   48354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:36:14.749751   48354 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:36:14.972842   48354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:36:14.975537   48354 out.go:235]   - Generating certificates and keys ...
	I1202 12:36:14.975639   48354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:36:14.975725   48354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:36:15.125950   48354 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 12:36:15.230092   48354 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 12:36:15.520961   48354 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 12:36:15.645102   48354 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 12:36:15.934866   48354 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 12:36:15.935251   48354 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-127536 localhost] and IPs [192.168.72.153 127.0.0.1 ::1]
	I1202 12:36:16.142458   48354 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 12:36:16.142695   48354 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-127536 localhost] and IPs [192.168.72.153 127.0.0.1 ::1]
	I1202 12:36:16.568763   48354 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 12:36:16.746610   48354 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 12:36:17.168645   48354 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 12:36:17.168903   48354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:36:17.305286   48354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:36:17.624585   48354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:36:17.849354   48354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:36:17.976084   48354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:36:17.993175   48354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:36:17.994276   48354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:36:17.994345   48354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:36:18.131566   48354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:36:18.133330   48354 out.go:235]   - Booting up control plane ...
	I1202 12:36:18.133462   48354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:36:18.141716   48354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:36:18.142753   48354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:36:18.143604   48354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:36:18.147220   48354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:36:58.100744   48354 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:36:58.101564   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:36:58.101751   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:37:03.101344   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:37:03.101522   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:37:13.100979   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:37:13.101274   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:37:33.101415   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:37:33.101686   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:38:13.102930   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:38:13.103221   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:38:13.103241   48354 kubeadm.go:310] 
	I1202 12:38:13.103291   48354 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:38:13.103367   48354 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:38:13.103393   48354 kubeadm.go:310] 
	I1202 12:38:13.103444   48354 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:38:13.103487   48354 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:38:13.103639   48354 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:38:13.103661   48354 kubeadm.go:310] 
	I1202 12:38:13.103804   48354 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:38:13.103865   48354 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:38:13.103909   48354 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:38:13.103935   48354 kubeadm.go:310] 
	I1202 12:38:13.104091   48354 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:38:13.104226   48354 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:38:13.104261   48354 kubeadm.go:310] 
	I1202 12:38:13.104382   48354 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:38:13.104504   48354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:38:13.104612   48354 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:38:13.104721   48354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:38:13.104731   48354 kubeadm.go:310] 
	I1202 12:38:13.106124   48354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:38:13.106251   48354 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:38:13.106359   48354 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1202 12:38:13.106501   48354 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-127536 localhost] and IPs [192.168.72.153 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-127536 localhost] and IPs [192.168.72.153 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
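
At this point the first kubeadm init attempt has timed out because the kubelet never answered its health endpoint at 127.0.0.1:10248. The remediation hints in the error text above boil down to the following checks on a CRI-O node such as this one (CONTAINERID is the placeholder from the crictl ps output):

    curl -sSL http://localhost:10248/healthz                       # the probe kubeadm kept retrying
    sudo systemctl status kubelet                                  # is the service running at all?
    sudo journalctl -xeu kubelet --no-pager | tail -n 50           # why it exited or failed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
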
	
	I1202 12:38:13.106544   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:38:13.755245   48354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:38:13.769466   48354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:38:13.779222   48354 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:38:13.779241   48354 kubeadm.go:157] found existing configuration files:
	
	I1202 12:38:13.779277   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:38:13.788888   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:38:13.788929   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:38:13.798691   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:38:13.810699   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:38:13.810740   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:38:13.820314   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:38:13.831108   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:38:13.831148   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:38:13.844282   48354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:38:13.856416   48354 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:38:13.856466   48354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:38:13.868703   48354 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:38:13.951265   48354 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:38:13.951319   48354 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:38:14.093131   48354 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:38:14.093263   48354 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:38:14.093398   48354 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:38:14.286629   48354 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:38:14.288457   48354 out.go:235]   - Generating certificates and keys ...
	I1202 12:38:14.288544   48354 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:38:14.288641   48354 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:38:14.288764   48354 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:38:14.288851   48354 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:38:14.288948   48354 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:38:14.289045   48354 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:38:14.289140   48354 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:38:14.289243   48354 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:38:14.289361   48354 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:38:14.289485   48354 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:38:14.289550   48354 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:38:14.289641   48354 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:38:14.547221   48354 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:38:14.620974   48354 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:38:14.683017   48354 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:38:14.795856   48354 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:38:14.811749   48354 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:38:14.811977   48354 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:38:14.812040   48354 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:38:14.975013   48354 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:38:14.977020   48354 out.go:235]   - Booting up control plane ...
	I1202 12:38:14.977154   48354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:38:14.981816   48354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:38:14.982749   48354 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:38:14.986450   48354 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:38:14.988466   48354 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:38:54.988862   48354 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:38:54.989074   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:38:54.989355   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:38:59.989593   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:38:59.989836   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:39:09.990059   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:39:09.990316   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:39:29.991266   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:39:29.991491   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:40:09.993259   48354 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:40:09.993476   48354 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:40:09.993496   48354 kubeadm.go:310] 
	I1202 12:40:09.993535   48354 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:40:09.993584   48354 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:40:09.993592   48354 kubeadm.go:310] 
	I1202 12:40:09.993635   48354 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:40:09.993669   48354 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:40:09.993823   48354 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:40:09.993835   48354 kubeadm.go:310] 
	I1202 12:40:09.993985   48354 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:40:09.994056   48354 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:40:09.994105   48354 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:40:09.994115   48354 kubeadm.go:310] 
	I1202 12:40:09.994271   48354 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:40:09.994356   48354 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:40:09.994364   48354 kubeadm.go:310] 
	I1202 12:40:09.994485   48354 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:40:09.994583   48354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:40:09.994690   48354 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:40:09.994760   48354 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:40:09.994767   48354 kubeadm.go:310] 
	I1202 12:40:09.995101   48354 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:40:09.995195   48354 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:40:09.995283   48354 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:40:09.995361   48354 kubeadm.go:394] duration metric: took 3m55.776885378s to StartCluster
	I1202 12:40:09.995423   48354 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:40:09.995495   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:40:10.036221   48354 cri.go:89] found id: ""
	I1202 12:40:10.036260   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.036269   48354 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:40:10.036275   48354 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:40:10.036323   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:40:10.068761   48354 cri.go:89] found id: ""
	I1202 12:40:10.068782   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.068794   48354 logs.go:284] No container was found matching "etcd"
	I1202 12:40:10.068800   48354 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:40:10.068856   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:40:10.102255   48354 cri.go:89] found id: ""
	I1202 12:40:10.102278   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.102288   48354 logs.go:284] No container was found matching "coredns"
	I1202 12:40:10.102295   48354 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:40:10.102355   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:40:10.135573   48354 cri.go:89] found id: ""
	I1202 12:40:10.135605   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.135616   48354 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:40:10.135623   48354 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:40:10.135676   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:40:10.172958   48354 cri.go:89] found id: ""
	I1202 12:40:10.172985   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.172995   48354 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:40:10.173003   48354 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:40:10.173057   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:40:10.205716   48354 cri.go:89] found id: ""
	I1202 12:40:10.205742   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.205752   48354 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:40:10.205760   48354 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:40:10.205821   48354 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:40:10.242826   48354 cri.go:89] found id: ""
	I1202 12:40:10.242861   48354 logs.go:282] 0 containers: []
	W1202 12:40:10.242871   48354 logs.go:284] No container was found matching "kindnet"
	I1202 12:40:10.242881   48354 logs.go:123] Gathering logs for dmesg ...
	I1202 12:40:10.242894   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:40:10.256104   48354 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:40:10.256130   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:40:10.366562   48354 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:40:10.366589   48354 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:40:10.366606   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:40:10.473392   48354 logs.go:123] Gathering logs for container status ...
	I1202 12:40:10.473425   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:40:10.520718   48354 logs.go:123] Gathering logs for kubelet ...
	I1202 12:40:10.520744   48354 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 12:40:10.570851   48354 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:40:10.570921   48354 out.go:270] * 
	* 
	W1202 12:40:10.570975   48354 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:40:10.570992   48354 out.go:270] * 
	* 
	W1202 12:40:10.571748   48354 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:40:10.575108   48354 out.go:201] 
	W1202 12:40:10.576306   48354 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:40:10.576359   48354 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:40:10.576378   48354 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:40:10.577866   48354 out.go:201] 

                                                
                                                
** /stderr **
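(Editor's note, not part of the captured run: the suggestion in the stderr above points at the kubelet cgroup driver. A minimal manual retry of the failed v1.20.0 start, reusing the same arguments the test passed and appending only the suggested --extra-config flag, might look like the sketch below; whether that flag resolves this particular failure is not verified by this report.)

	out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --alsologtostderr -v=1 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still fails, inspect it on the node, as the kubeadm output advises:
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-127536 -- sudo journalctl -xeu kubelet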
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-127536
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-127536: (2.281438355s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-127536 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-127536 status --format={{.Host}}: exit status 7 (61.756596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.273371684s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-127536 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.817512ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-127536] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-127536
	    minikube start -p kubernetes-upgrade-127536 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1275362 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-127536 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
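(Editor's note, not part of the captured run: option 1 of the suggestion above, run as a single sequence against the same profile. The --memory, --driver and --container-runtime flags are carried over from the test invocation and are an assumption here, not part of minikube's suggested commands.)

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-127536
	out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio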
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-127536 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.525274491s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-02 12:41:31.918083712 +0000 UTC m=+4270.749868729
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-127536 -n kubernetes-upgrade-127536
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-127536 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-127536 logs -n 25: (1.666253893s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-256954 sudo                                 | cilium-256954             | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-256954 sudo find                            | cilium-256954             | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-256954 sudo crio                            | cilium-256954             | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC |                     |
	|         | config                                                |                           |         |         |                     |                     |
	| delete  | -p cilium-256954                                      | cilium-256954             | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC | 02 Dec 24 12:37 UTC |
	| start   | -p cert-expiration-424616                             | cert-expiration-424616    | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC | 02 Dec 24 12:38 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-405664 sudo                           | NoKubernetes-405664       | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-405664                                | NoKubernetes-405664       | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC | 02 Dec 24 12:37 UTC |
	| start   | -p force-systemd-flag-615809                          | force-systemd-flag-615809 | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC | 02 Dec 24 12:39 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-994398 stop                           | minikube                  | jenkins | v1.26.0 | 02 Dec 24 12:37 UTC | 02 Dec 24 12:37 UTC |
	| start   | -p stopped-upgrade-994398                             | stopped-upgrade-994398    | jenkins | v1.34.0 | 02 Dec 24 12:37 UTC | 02 Dec 24 12:39 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-615809 ssh cat                     | force-systemd-flag-615809 | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-615809                          | force-systemd-flag-615809 | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	| start   | -p cert-options-536755                                | cert-options-536755       | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-994398                             | stopped-upgrade-994398    | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	| start   | -p old-k8s-version-666766                             | old-k8s-version-666766    | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | cert-options-536755 ssh                               | cert-options-536755       | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-536755 -- sudo                        | cert-options-536755       | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-536755                                | cert-options-536755       | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:39 UTC |
	| start   | -p no-preload-658679                                  | no-preload-658679         | jenkins | v1.34.0 | 02 Dec 24 12:39 UTC | 02 Dec 24 12:40 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-127536                          | kubernetes-upgrade-127536 | jenkins | v1.34.0 | 02 Dec 24 12:40 UTC | 02 Dec 24 12:40 UTC |
	| start   | -p kubernetes-upgrade-127536                          | kubernetes-upgrade-127536 | jenkins | v1.34.0 | 02 Dec 24 12:40 UTC | 02 Dec 24 12:41 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-127536                          | kubernetes-upgrade-127536 | jenkins | v1.34.0 | 02 Dec 24 12:41 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-127536                          | kubernetes-upgrade-127536 | jenkins | v1.34.0 | 02 Dec 24 12:41 UTC | 02 Dec 24 12:41 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-658679            | no-preload-658679         | jenkins | v1.34.0 | 02 Dec 24 12:41 UTC | 02 Dec 24 12:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-658679                                  | no-preload-658679         | jenkins | v1.34.0 | 02 Dec 24 12:41 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:41:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:41:01.433953   55875 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:41:01.434063   55875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:41:01.434070   55875 out.go:358] Setting ErrFile to fd 2...
	I1202 12:41:01.434074   55875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:41:01.434231   55875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:41:01.434721   55875 out.go:352] Setting JSON to false
	I1202 12:41:01.435591   55875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5013,"bootTime":1733138248,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:41:01.435641   55875 start.go:139] virtualization: kvm guest
	I1202 12:41:01.437418   55875 out.go:177] * [kubernetes-upgrade-127536] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:41:01.438623   55875 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:41:01.438653   55875 notify.go:220] Checking for updates...
	I1202 12:41:01.440742   55875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:41:01.442048   55875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:41:01.443167   55875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:41:01.444246   55875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:41:01.445278   55875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:41:01.446662   55875 config.go:182] Loaded profile config "kubernetes-upgrade-127536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:41:01.447016   55875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:41:01.447099   55875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:41:01.462017   55875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I1202 12:41:01.462471   55875 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:41:01.462960   55875 main.go:141] libmachine: Using API Version  1
	I1202 12:41:01.462981   55875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:41:01.463270   55875 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:41:01.463425   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:01.463626   55875 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:41:01.463890   55875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:41:01.463922   55875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:41:01.478551   55875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I1202 12:41:01.478990   55875 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:41:01.479464   55875 main.go:141] libmachine: Using API Version  1
	I1202 12:41:01.479483   55875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:41:01.479844   55875 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:41:01.480007   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:01.514584   55875 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:41:01.515618   55875 start.go:297] selected driver: kvm2
	I1202 12:41:01.515632   55875 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:41:01.515749   55875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:41:01.516456   55875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:41:01.516533   55875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:41:01.531422   55875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:41:01.531807   55875 cni.go:84] Creating CNI manager for ""
	I1202 12:41:01.531855   55875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:41:01.531884   55875 start.go:340] cluster config:
	{Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-127536 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:41:01.531989   55875 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:41:01.534074   55875 out.go:177] * Starting "kubernetes-upgrade-127536" primary control-plane node in "kubernetes-upgrade-127536" cluster
	I1202 12:41:01.535128   55875 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:41:01.535154   55875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:41:01.535161   55875 cache.go:56] Caching tarball of preloaded images
	I1202 12:41:01.535222   55875 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:41:01.535233   55875 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:41:01.535346   55875 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/config.json ...
	I1202 12:41:01.535555   55875 start.go:360] acquireMachinesLock for kubernetes-upgrade-127536: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:41:01.535605   55875 start.go:364] duration metric: took 30.643µs to acquireMachinesLock for "kubernetes-upgrade-127536"
	I1202 12:41:01.535625   55875 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:41:01.535634   55875 fix.go:54] fixHost starting: 
	I1202 12:41:01.535911   55875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:41:01.535952   55875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:41:01.549779   55875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37729
	I1202 12:41:01.550216   55875 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:41:01.550666   55875 main.go:141] libmachine: Using API Version  1
	I1202 12:41:01.550686   55875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:41:01.551039   55875 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:41:01.551270   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:01.551418   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetState
	I1202 12:41:01.552868   55875 fix.go:112] recreateIfNeeded on kubernetes-upgrade-127536: state=Running err=<nil>
	W1202 12:41:01.552899   55875 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:41:01.554346   55875 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-127536" VM ...
	I1202 12:40:59.959134   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:40:59.959378   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:41:01.555386   55875 machine.go:93] provisionDockerMachine start ...
	I1202 12:41:01.555400   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:01.555558   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:01.557841   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.558243   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:01.558271   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.558407   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:01.558584   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:01.558729   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:01.558836   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:01.558937   55875 main.go:141] libmachine: Using SSH client type: native
	I1202 12:41:01.559109   55875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:41:01.559120   55875 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:41:01.673025   55875 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-127536
	
	I1202 12:41:01.673059   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:41:01.673285   55875 buildroot.go:166] provisioning hostname "kubernetes-upgrade-127536"
	I1202 12:41:01.673336   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:41:01.673564   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:01.676254   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.676665   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:01.676695   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.676820   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:01.676982   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:01.677118   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:01.677255   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:01.677439   55875 main.go:141] libmachine: Using SSH client type: native
	I1202 12:41:01.677606   55875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:41:01.677619   55875 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-127536 && echo "kubernetes-upgrade-127536" | sudo tee /etc/hostname
	I1202 12:41:01.789991   55875 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-127536
	
	I1202 12:41:01.790019   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:01.792628   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.792975   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:01.793005   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.793171   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:01.793333   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:01.793535   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:01.793663   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:01.793824   55875 main.go:141] libmachine: Using SSH client type: native
	I1202 12:41:01.794035   55875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:41:01.794056   55875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-127536' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-127536/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-127536' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:41:01.921897   55875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
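	(Note: the heredoc above idempotently pins 127.0.1.1 to the node's hostname in /etc/hosts. A quick manual check of the result — illustrative only, not executed by the test — could be:
	    # assumes a stock buildroot /etc/hosts on the guest
	    grep 'kubernetes-upgrade-127536' /etc/hosts   # expect a 127.0.1.1 mapping after the script runs
	)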
	I1202 12:41:01.921929   55875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:41:01.921975   55875 buildroot.go:174] setting up certificates
	I1202 12:41:01.921990   55875 provision.go:84] configureAuth start
	I1202 12:41:01.922009   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetMachineName
	I1202 12:41:01.922361   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:41:01.925636   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.926046   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:01.926077   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.926206   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:01.929037   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.929462   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:01.929499   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:01.929638   55875 provision.go:143] copyHostCerts
	I1202 12:41:01.929688   55875 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:41:01.929698   55875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:41:01.929771   55875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:41:01.929883   55875 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:41:01.929893   55875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:41:01.929913   55875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:41:01.929975   55875 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:41:01.929982   55875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:41:01.930000   55875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:41:01.930047   55875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-127536 san=[127.0.0.1 192.168.72.153 kubernetes-upgrade-127536 localhost minikube]
	I1202 12:41:02.126600   55875 provision.go:177] copyRemoteCerts
	I1202 12:41:02.126667   55875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:41:02.126690   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:02.129229   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:02.129548   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:02.129585   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:02.129729   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:02.129936   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:02.130094   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:02.130219   55875 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:41:02.236723   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1202 12:41:02.385661   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:41:02.433266   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:41:02.472924   55875 provision.go:87] duration metric: took 550.915838ms to configureAuth
	I1202 12:41:02.472955   55875 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:41:02.473157   55875 config.go:182] Loaded profile config "kubernetes-upgrade-127536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:41:02.473251   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:02.476193   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:02.476653   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:02.476683   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:02.476893   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:02.477063   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:02.477230   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:02.477365   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:02.477471   55875 main.go:141] libmachine: Using SSH client type: native
	I1202 12:41:02.477653   55875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:41:02.477668   55875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:41:03.662996   55875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:41:03.663023   55875 machine.go:96] duration metric: took 2.10762445s to provisionDockerMachine
	I1202 12:41:03.663038   55875 start.go:293] postStartSetup for "kubernetes-upgrade-127536" (driver="kvm2")
	I1202 12:41:03.663051   55875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:41:03.663078   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:03.663445   55875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:41:03.663486   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:03.665760   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.666049   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:03.666080   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.666218   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:03.666416   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:03.666589   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:03.666755   55875 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:41:03.752735   55875 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:41:03.757535   55875 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:41:03.757556   55875 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:41:03.757610   55875 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:41:03.757699   55875 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:41:03.757819   55875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:41:03.767662   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:41:03.791873   55875 start.go:296] duration metric: took 128.823045ms for postStartSetup
	I1202 12:41:03.791913   55875 fix.go:56] duration metric: took 2.256278744s for fixHost
	I1202 12:41:03.791939   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:03.794441   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.794832   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:03.794859   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.795002   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:03.795185   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:03.795316   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:03.795461   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:03.795624   55875 main.go:141] libmachine: Using SSH client type: native
	I1202 12:41:03.795783   55875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.153 22 <nil> <nil>}
	I1202 12:41:03.795801   55875 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:41:03.896841   55875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733143263.854991129
	
	I1202 12:41:03.896861   55875 fix.go:216] guest clock: 1733143263.854991129
	I1202 12:41:03.896869   55875 fix.go:229] Guest: 2024-12-02 12:41:03.854991129 +0000 UTC Remote: 2024-12-02 12:41:03.791918853 +0000 UTC m=+2.394678561 (delta=63.072276ms)
	I1202 12:41:03.896893   55875 fix.go:200] guest clock delta is within tolerance: 63.072276ms
	I1202 12:41:03.896899   55875 start.go:83] releasing machines lock for "kubernetes-upgrade-127536", held for 2.361281892s
	I1202 12:41:03.896918   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:03.897167   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:41:03.899879   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.900224   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:03.900293   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.900410   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:03.900847   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:03.901023   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .DriverName
	I1202 12:41:03.901107   55875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:41:03.901145   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:03.901203   55875 ssh_runner.go:195] Run: cat /version.json
	I1202 12:41:03.901226   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHHostname
	I1202 12:41:03.903928   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.904044   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.904288   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:03.904317   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.904437   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:03.904440   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:03.904456   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:03.904627   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:03.904627   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHPort
	I1202 12:41:03.904795   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:03.904813   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHKeyPath
	I1202 12:41:03.904960   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetSSHUsername
	I1202 12:41:03.904968   55875 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:41:03.905057   55875 sshutil.go:53] new ssh client: &{IP:192.168.72.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kubernetes-upgrade-127536/id_rsa Username:docker}
	I1202 12:41:04.001535   55875 ssh_runner.go:195] Run: systemctl --version
	I1202 12:41:04.007699   55875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:41:04.164251   55875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:41:04.170656   55875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:41:04.170737   55875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:41:04.180271   55875 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 12:41:04.180295   55875 start.go:495] detecting cgroup driver to use...
	I1202 12:41:04.180371   55875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:41:04.196887   55875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:41:04.215886   55875 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:41:04.215930   55875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:41:04.230196   55875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:41:04.245287   55875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:41:04.427331   55875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:41:04.671214   55875 docker.go:233] disabling docker service ...
	I1202 12:41:04.671285   55875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:41:04.865978   55875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:41:04.921401   55875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:41:05.449074   55875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:41:05.799333   55875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:41:05.829865   55875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:41:05.920831   55875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:41:05.920903   55875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:05.943995   55875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:41:05.944067   55875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:05.966510   55875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:05.981221   55875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:05.997027   55875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:41:06.009849   55875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:06.025535   55875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:06.038191   55875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:41:06.054493   55875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:41:06.068807   55875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:41:06.083971   55875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:41:06.354044   55875 ssh_runner.go:195] Run: sudo systemctl restart crio
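	(Note: the sed pipeline above amounts to the following fields in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. This is a sketch reconstructed from the commands in this log; the TOML section headers that group these keys in the real file are omitted here and assumed to match cri-o's stock layout:
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)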
	I1202 12:41:06.997615   55875 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:41:06.997707   55875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:41:07.002746   55875 start.go:563] Will wait 60s for crictl version
	I1202 12:41:07.002802   55875 ssh_runner.go:195] Run: which crictl
	I1202 12:41:07.006650   55875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:41:07.046195   55875 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:41:07.046270   55875 ssh_runner.go:195] Run: crio --version
	I1202 12:41:07.078777   55875 ssh_runner.go:195] Run: crio --version
	I1202 12:41:07.113249   55875 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:41:07.114481   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) Calling .GetIP
	I1202 12:41:07.117499   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:07.117905   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:a8:26", ip: ""} in network mk-kubernetes-upgrade-127536: {Iface:virbr4 ExpiryTime:2024-12-02 13:40:30 +0000 UTC Type:0 Mac:52:54:00:b3:a8:26 Iaid: IPaddr:192.168.72.153 Prefix:24 Hostname:kubernetes-upgrade-127536 Clientid:01:52:54:00:b3:a8:26}
	I1202 12:41:07.117937   55875 main.go:141] libmachine: (kubernetes-upgrade-127536) DBG | domain kubernetes-upgrade-127536 has defined IP address 192.168.72.153 and MAC address 52:54:00:b3:a8:26 in network mk-kubernetes-upgrade-127536
	I1202 12:41:07.118140   55875 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1202 12:41:07.122552   55875 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:41:07.122636   55875 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:41:07.122673   55875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:41:07.161972   55875 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:41:07.161994   55875 crio.go:433] Images already preloaded, skipping extraction
	I1202 12:41:07.162039   55875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:41:07.195320   55875 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:41:07.195341   55875 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:41:07.195349   55875 kubeadm.go:934] updating node { 192.168.72.153 8443 v1.31.2 crio true true} ...
	I1202 12:41:07.195445   55875 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-127536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
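	(Note: the [Service] override above is written as a systemd drop-in — see the scp of 10-kubeadm.conf into /etc/systemd/system/kubelet.service.d/ a few lines below — with ExecStart= cleared first so the drop-in fully replaces the packaged command line. An illustrative way to inspect the merged unit on the node, not part of this test run:
	    sudo systemctl cat kubelet        # shows kubelet.service plus the 10-kubeadm.conf drop-in
	    sudo systemctl status kubelet --no-pager
	)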
	I1202 12:41:07.195502   55875 ssh_runner.go:195] Run: crio config
	I1202 12:41:07.243604   55875 cni.go:84] Creating CNI manager for ""
	I1202 12:41:07.243632   55875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:41:07.243643   55875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:41:07.243671   55875 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.153 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-127536 NodeName:kubernetes-upgrade-127536 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:41:07.243822   55875 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-127536"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.153"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.153"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:41:07.243895   55875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:41:07.254372   55875 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:41:07.254447   55875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:41:07.263759   55875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1202 12:41:07.280331   55875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:41:07.297682   55875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
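The kubeadm config rendered at kubeadm.go:195 is now on the node as /var/tmp/minikube/kubeadm.yaml.new (2305 bytes). When a config problem is suspected, one optional sanity check is kubeadm's own validator; this is a sketch assuming the validate subcommand is available in the bundled v1.31.2 binary, not a step the test itself runs:
  minikube ssh -p kubernetes-upgrade-127536 -- \
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new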
	I1202 12:41:07.314532   55875 ssh_runner.go:195] Run: grep 192.168.72.153	control-plane.minikube.internal$ /etc/hosts
	I1202 12:41:07.318540   55875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:41:07.468916   55875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:41:07.483593   55875 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536 for IP: 192.168.72.153
	I1202 12:41:07.483612   55875 certs.go:194] generating shared ca certs ...
	I1202 12:41:07.483627   55875 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:41:07.483782   55875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:41:07.483824   55875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:41:07.483831   55875 certs.go:256] generating profile certs ...
	I1202 12:41:07.483912   55875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/client.key
	I1202 12:41:07.483955   55875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key.2834edd1
	I1202 12:41:07.483994   55875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.key
	I1202 12:41:07.484114   55875 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:41:07.484153   55875 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:41:07.484162   55875 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:41:07.484182   55875 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:41:07.484204   55875 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:41:07.484225   55875 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:41:07.484296   55875 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:41:07.484945   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:41:07.510125   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:41:07.577768   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:41:07.632291   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:41:07.848683   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 12:41:08.092541   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:41:08.183150   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:41:08.226270   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kubernetes-upgrade-127536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:41:08.265751   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:41:08.307960   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:41:08.365721   55875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:41:08.404254   55875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
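All key pairs are now in place under /var/lib/minikube/certs, with the extra CA copies under /usr/share/ca-certificates. A simple check that the copied leaf certificates actually chain to the reused CAs (paths taken from the scp lines above), run inside the VM:
  # confirm the API server certificate is signed by the minikube CA
  sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt
  # same idea for the front-proxy client pair
  sudo openssl verify -CAfile /var/lib/minikube/certs/proxy-client-ca.crt /var/lib/minikube/certs/proxy-client.crt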
	I1202 12:41:08.443688   55875 ssh_runner.go:195] Run: openssl version
	I1202 12:41:08.463984   55875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:41:08.502787   55875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:41:08.518018   55875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:41:08.518078   55875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:41:08.527456   55875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:41:08.605386   55875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:41:08.625844   55875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:41:08.634990   55875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:41:08.635055   55875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:41:08.650966   55875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:41:08.663035   55875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:41:08.678289   55875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:41:08.683372   55875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:41:08.683421   55875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:41:08.760176   55875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
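The ls/openssl/ln sequence above is the usual c_rehash-style layout: each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink. The hash in a link name can be reproduced directly; for example, the command below should print b5213941, matching the symlink created for minikubeCA.pem a few lines up:
  # -hash, as used in the log above, is an alias for -subject_hash
  openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/minikubeCA.pem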
	I1202 12:41:08.770489   55875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:41:08.775400   55875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:41:08.784482   55875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:41:08.791532   55875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:41:08.797605   55875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:41:08.803772   55875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:41:08.809725   55875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
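Each of the -checkend 86400 runs above exits non-zero if the certificate would expire within the next 24 hours, which is what tells minikube to regenerate it. The same check, together with the actual expiry date, can be reproduced inside the VM, e.g. for the apiserver-kubelet-client pair:
  sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
  sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for at least 24h" || echo "expires within 24h"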
	I1202 12:41:08.815498   55875 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-127536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-127536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.153 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:41:08.815598   55875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:41:08.815660   55875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:41:08.870834   55875 cri.go:89] found id: "6f8018f9ab2f5f4b155e78870e2cb4370c883f5038391dd7e78d0ec14f6636cc"
	I1202 12:41:08.870861   55875 cri.go:89] found id: "e2a65d792064377d94f49f28606b42ddf6a014c9193884b35be090e5fa7e81d8"
	I1202 12:41:08.870867   55875 cri.go:89] found id: "1894f562084eb4d7bfd668d03db8105f92c44eeb6cb08f6f25eb9f826648e793"
	I1202 12:41:08.870872   55875 cri.go:89] found id: "bdfb604534f18e1140f5de1b24c6bfe3169de4ac6b348b5536903bf838a76594"
	I1202 12:41:08.870877   55875 cri.go:89] found id: "0ce7eef2d0c2e24b1abe8f0ec9df1afa9ccf242fa1b2ccb6242c93fa6fd20723"
	I1202 12:41:08.870882   55875 cri.go:89] found id: "2dfbcd1f6ec6fa9a6723d15baa53cfa2056c4509f001c7f447822242a7a9d808"
	I1202 12:41:08.870886   55875 cri.go:89] found id: "9e5e2b88e9fa554cec90a38333d95b943a07e85245d0b678e6e29c844d100bed"
	I1202 12:41:08.870891   55875 cri.go:89] found id: "d42da5356df5526fe1967943bb03bb9a42b56243dc7816aea6542a35acf63d8e"
	I1202 12:41:08.870895   55875 cri.go:89] found id: "d3a99714ac4ed26d6e5bcb757b82de31fcc4d6227a75481a0acc525c370f589f"
	I1202 12:41:08.870905   55875 cri.go:89] found id: ""
	I1202 12:41:08.870957   55875 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-127536 -n kubernetes-upgrade-127536
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-127536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-127536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-127536
--- FAIL: TestKubernetesUpgrade (399.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (270.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1202 12:39:44.446670   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m30.325210191s)

                                                
                                                
-- stdout --
	* [old-k8s-version-666766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-666766" primary control-plane node in "old-k8s-version-666766" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:39:27.524285   54652 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:39:27.524408   54652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:39:27.524419   54652 out.go:358] Setting ErrFile to fd 2...
	I1202 12:39:27.524426   54652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:39:27.524682   54652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:39:27.525446   54652 out.go:352] Setting JSON to false
	I1202 12:39:27.526747   54652 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4919,"bootTime":1733138248,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:39:27.526874   54652 start.go:139] virtualization: kvm guest
	I1202 12:39:27.529068   54652 out.go:177] * [old-k8s-version-666766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:39:27.530363   54652 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:39:27.530381   54652 notify.go:220] Checking for updates...
	I1202 12:39:27.532505   54652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:39:27.533715   54652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:39:27.534911   54652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:39:27.535992   54652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:39:27.536971   54652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:39:27.538422   54652 config.go:182] Loaded profile config "cert-expiration-424616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:39:27.538539   54652 config.go:182] Loaded profile config "cert-options-536755": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:39:27.538613   54652 config.go:182] Loaded profile config "kubernetes-upgrade-127536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1202 12:39:27.538684   54652 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:39:27.575897   54652 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 12:39:27.577037   54652 start.go:297] selected driver: kvm2
	I1202 12:39:27.577052   54652 start.go:901] validating driver "kvm2" against <nil>
	I1202 12:39:27.577062   54652 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:39:27.577695   54652 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:39:27.577749   54652 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:39:27.592874   54652 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:39:27.592946   54652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 12:39:27.593262   54652 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:39:27.593297   54652 cni.go:84] Creating CNI manager for ""
	I1202 12:39:27.593352   54652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:39:27.593363   54652 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
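With the kvm2 driver and the crio runtime, minikube auto-selects the bridge CNI as logged above. For comparison runs the choice can also be pinned explicitly; the command below is only an illustration built from the flags recorded at the top of this test, with --cni added, not the invocation the test used:
  out/minikube-linux-amd64 start -p old-k8s-version-666766 --driver=kvm2 --container-runtime=crio \
    --kubernetes-version=v1.20.0 --cni=bridge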
	I1202 12:39:27.593443   54652 start.go:340] cluster config:
	{Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:39:27.593550   54652 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:39:27.595980   54652 out.go:177] * Starting "old-k8s-version-666766" primary control-plane node in "old-k8s-version-666766" cluster
	I1202 12:39:27.597156   54652 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 12:39:27.597198   54652 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 12:39:27.597214   54652 cache.go:56] Caching tarball of preloaded images
	I1202 12:39:27.597298   54652 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:39:27.597311   54652 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1202 12:39:27.597409   54652 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/config.json ...
	I1202 12:39:27.597435   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/config.json: {Name:mk05200316d1d5ab4813fddc5d7a79bdf8e6f30d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:39:27.597576   54652 start.go:360] acquireMachinesLock for old-k8s-version-666766: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:39:27.597611   54652 start.go:364] duration metric: took 19.483µs to acquireMachinesLock for "old-k8s-version-666766"
	I1202 12:39:27.597633   54652 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:39:27.597713   54652 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 12:39:27.599398   54652 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1202 12:39:27.599581   54652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:39:27.599629   54652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:39:27.613616   54652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I1202 12:39:27.614102   54652 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:39:27.614626   54652 main.go:141] libmachine: Using API Version  1
	I1202 12:39:27.614650   54652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:39:27.615056   54652 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:39:27.615226   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:39:27.615370   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:27.615524   54652 start.go:159] libmachine.API.Create for "old-k8s-version-666766" (driver="kvm2")
	I1202 12:39:27.615554   54652 client.go:168] LocalClient.Create starting
	I1202 12:39:27.615582   54652 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 12:39:27.615612   54652 main.go:141] libmachine: Decoding PEM data...
	I1202 12:39:27.615628   54652 main.go:141] libmachine: Parsing certificate...
	I1202 12:39:27.615676   54652 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 12:39:27.615693   54652 main.go:141] libmachine: Decoding PEM data...
	I1202 12:39:27.615709   54652 main.go:141] libmachine: Parsing certificate...
	I1202 12:39:27.615724   54652 main.go:141] libmachine: Running pre-create checks...
	I1202 12:39:27.615733   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .PreCreateCheck
	I1202 12:39:27.616045   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetConfigRaw
	I1202 12:39:27.616455   54652 main.go:141] libmachine: Creating machine...
	I1202 12:39:27.616468   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .Create
	I1202 12:39:27.616579   54652 main.go:141] libmachine: (old-k8s-version-666766) Creating KVM machine...
	I1202 12:39:27.617773   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found existing default KVM network
	I1202 12:39:27.618998   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:27.618871   54675 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:4c:48} reservation:<nil>}
	I1202 12:39:27.620080   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:27.620008   54675 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034a0b0}
	I1202 12:39:27.620099   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | created network xml: 
	I1202 12:39:27.620110   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | <network>
	I1202 12:39:27.620117   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |   <name>mk-old-k8s-version-666766</name>
	I1202 12:39:27.620145   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |   <dns enable='no'/>
	I1202 12:39:27.620167   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |   
	I1202 12:39:27.620183   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1202 12:39:27.620194   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |     <dhcp>
	I1202 12:39:27.620242   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1202 12:39:27.620262   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |     </dhcp>
	I1202 12:39:27.620275   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |   </ip>
	I1202 12:39:27.620290   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG |   
	I1202 12:39:27.620301   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | </network>
	I1202 12:39:27.620308   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | 
	I1202 12:39:27.624901   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | trying to create private KVM network mk-old-k8s-version-666766 192.168.50.0/24...
	I1202 12:39:27.693448   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | private KVM network mk-old-k8s-version-666766 192.168.50.0/24 created
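The <network> XML above is what libmachine defines in libvirt before creating the VM. Once the "private KVM network ... created" line appears, the result can be inspected from the host; the qemu:///system URI below matches KVMQemuURI in this profile:
  virsh -c qemu:///system net-list --all
  virsh -c qemu:///system net-dumpxml mk-old-k8s-version-666766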
	I1202 12:39:27.693498   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:27.693412   54675 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:39:27.693512   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766 ...
	I1202 12:39:27.693584   54652 main.go:141] libmachine: (old-k8s-version-666766) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 12:39:27.693624   54652 main.go:141] libmachine: (old-k8s-version-666766) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 12:39:27.944775   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:27.944632   54675 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa...
	I1202 12:39:28.141092   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:28.140900   54675 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/old-k8s-version-666766.rawdisk...
	I1202 12:39:28.141128   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Writing magic tar header
	I1202 12:39:28.141148   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Writing SSH key tar header
	I1202 12:39:28.141167   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:28.141061   54675 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766 ...
	I1202 12:39:28.141185   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766
	I1202 12:39:28.141244   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766 (perms=drwx------)
	I1202 12:39:28.141286   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 12:39:28.141310   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 12:39:28.141325   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 12:39:28.141339   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 12:39:28.141357   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:39:28.141387   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 12:39:28.141404   54652 main.go:141] libmachine: (old-k8s-version-666766) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 12:39:28.141446   54652 main.go:141] libmachine: (old-k8s-version-666766) Creating domain...
	I1202 12:39:28.141462   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 12:39:28.141472   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 12:39:28.141480   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home/jenkins
	I1202 12:39:28.141495   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Checking permissions on dir: /home
	I1202 12:39:28.141507   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Skipping /home - not owner
	I1202 12:39:28.142467   54652 main.go:141] libmachine: (old-k8s-version-666766) define libvirt domain using xml: 
	I1202 12:39:28.142491   54652 main.go:141] libmachine: (old-k8s-version-666766) <domain type='kvm'>
	I1202 12:39:28.142502   54652 main.go:141] libmachine: (old-k8s-version-666766)   <name>old-k8s-version-666766</name>
	I1202 12:39:28.142512   54652 main.go:141] libmachine: (old-k8s-version-666766)   <memory unit='MiB'>2200</memory>
	I1202 12:39:28.142524   54652 main.go:141] libmachine: (old-k8s-version-666766)   <vcpu>2</vcpu>
	I1202 12:39:28.142535   54652 main.go:141] libmachine: (old-k8s-version-666766)   <features>
	I1202 12:39:28.142543   54652 main.go:141] libmachine: (old-k8s-version-666766)     <acpi/>
	I1202 12:39:28.142551   54652 main.go:141] libmachine: (old-k8s-version-666766)     <apic/>
	I1202 12:39:28.142560   54652 main.go:141] libmachine: (old-k8s-version-666766)     <pae/>
	I1202 12:39:28.142576   54652 main.go:141] libmachine: (old-k8s-version-666766)     
	I1202 12:39:28.142599   54652 main.go:141] libmachine: (old-k8s-version-666766)   </features>
	I1202 12:39:28.142611   54652 main.go:141] libmachine: (old-k8s-version-666766)   <cpu mode='host-passthrough'>
	I1202 12:39:28.142622   54652 main.go:141] libmachine: (old-k8s-version-666766)   
	I1202 12:39:28.142630   54652 main.go:141] libmachine: (old-k8s-version-666766)   </cpu>
	I1202 12:39:28.142640   54652 main.go:141] libmachine: (old-k8s-version-666766)   <os>
	I1202 12:39:28.142651   54652 main.go:141] libmachine: (old-k8s-version-666766)     <type>hvm</type>
	I1202 12:39:28.142682   54652 main.go:141] libmachine: (old-k8s-version-666766)     <boot dev='cdrom'/>
	I1202 12:39:28.142703   54652 main.go:141] libmachine: (old-k8s-version-666766)     <boot dev='hd'/>
	I1202 12:39:28.142723   54652 main.go:141] libmachine: (old-k8s-version-666766)     <bootmenu enable='no'/>
	I1202 12:39:28.142733   54652 main.go:141] libmachine: (old-k8s-version-666766)   </os>
	I1202 12:39:28.142743   54652 main.go:141] libmachine: (old-k8s-version-666766)   <devices>
	I1202 12:39:28.142750   54652 main.go:141] libmachine: (old-k8s-version-666766)     <disk type='file' device='cdrom'>
	I1202 12:39:28.142762   54652 main.go:141] libmachine: (old-k8s-version-666766)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/boot2docker.iso'/>
	I1202 12:39:28.142771   54652 main.go:141] libmachine: (old-k8s-version-666766)       <target dev='hdc' bus='scsi'/>
	I1202 12:39:28.142778   54652 main.go:141] libmachine: (old-k8s-version-666766)       <readonly/>
	I1202 12:39:28.142786   54652 main.go:141] libmachine: (old-k8s-version-666766)     </disk>
	I1202 12:39:28.142793   54652 main.go:141] libmachine: (old-k8s-version-666766)     <disk type='file' device='disk'>
	I1202 12:39:28.142804   54652 main.go:141] libmachine: (old-k8s-version-666766)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 12:39:28.142823   54652 main.go:141] libmachine: (old-k8s-version-666766)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/old-k8s-version-666766.rawdisk'/>
	I1202 12:39:28.142831   54652 main.go:141] libmachine: (old-k8s-version-666766)       <target dev='hda' bus='virtio'/>
	I1202 12:39:28.142837   54652 main.go:141] libmachine: (old-k8s-version-666766)     </disk>
	I1202 12:39:28.142845   54652 main.go:141] libmachine: (old-k8s-version-666766)     <interface type='network'>
	I1202 12:39:28.142854   54652 main.go:141] libmachine: (old-k8s-version-666766)       <source network='mk-old-k8s-version-666766'/>
	I1202 12:39:28.142863   54652 main.go:141] libmachine: (old-k8s-version-666766)       <model type='virtio'/>
	I1202 12:39:28.142873   54652 main.go:141] libmachine: (old-k8s-version-666766)     </interface>
	I1202 12:39:28.142881   54652 main.go:141] libmachine: (old-k8s-version-666766)     <interface type='network'>
	I1202 12:39:28.142889   54652 main.go:141] libmachine: (old-k8s-version-666766)       <source network='default'/>
	I1202 12:39:28.142897   54652 main.go:141] libmachine: (old-k8s-version-666766)       <model type='virtio'/>
	I1202 12:39:28.142910   54652 main.go:141] libmachine: (old-k8s-version-666766)     </interface>
	I1202 12:39:28.142921   54652 main.go:141] libmachine: (old-k8s-version-666766)     <serial type='pty'>
	I1202 12:39:28.142931   54652 main.go:141] libmachine: (old-k8s-version-666766)       <target port='0'/>
	I1202 12:39:28.142942   54652 main.go:141] libmachine: (old-k8s-version-666766)     </serial>
	I1202 12:39:28.142952   54652 main.go:141] libmachine: (old-k8s-version-666766)     <console type='pty'>
	I1202 12:39:28.142969   54652 main.go:141] libmachine: (old-k8s-version-666766)       <target type='serial' port='0'/>
	I1202 12:39:28.142979   54652 main.go:141] libmachine: (old-k8s-version-666766)     </console>
	I1202 12:39:28.142987   54652 main.go:141] libmachine: (old-k8s-version-666766)     <rng model='virtio'>
	I1202 12:39:28.143000   54652 main.go:141] libmachine: (old-k8s-version-666766)       <backend model='random'>/dev/random</backend>
	I1202 12:39:28.143010   54652 main.go:141] libmachine: (old-k8s-version-666766)     </rng>
	I1202 12:39:28.143018   54652 main.go:141] libmachine: (old-k8s-version-666766)     
	I1202 12:39:28.143031   54652 main.go:141] libmachine: (old-k8s-version-666766)     
	I1202 12:39:28.143042   54652 main.go:141] libmachine: (old-k8s-version-666766)   </devices>
	I1202 12:39:28.143052   54652 main.go:141] libmachine: (old-k8s-version-666766) </domain>
	I1202 12:39:28.143062   54652 main.go:141] libmachine: (old-k8s-version-666766) 
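The <domain> XML above (2 vCPUs, 2200 MiB, the boot2docker ISO plus the raw disk, and two virtio NICs) is what gets defined and started next. If creation stalls in a run like this, a couple of host-side virsh checks against the same URI usually narrow it down:
  virsh -c qemu:///system domstate old-k8s-version-666766
  virsh -c qemu:///system dumpxml old-k8s-version-666766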
	I1202 12:39:28.147412   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:69:a1:80 in network default
	I1202 12:39:28.147982   54652 main.go:141] libmachine: (old-k8s-version-666766) Ensuring networks are active...
	I1202 12:39:28.148008   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:28.148840   54652 main.go:141] libmachine: (old-k8s-version-666766) Ensuring network default is active
	I1202 12:39:28.149258   54652 main.go:141] libmachine: (old-k8s-version-666766) Ensuring network mk-old-k8s-version-666766 is active
	I1202 12:39:28.149876   54652 main.go:141] libmachine: (old-k8s-version-666766) Getting domain xml...
	I1202 12:39:28.150626   54652 main.go:141] libmachine: (old-k8s-version-666766) Creating domain...
	I1202 12:39:29.527206   54652 main.go:141] libmachine: (old-k8s-version-666766) Waiting to get IP...
	I1202 12:39:29.528184   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:29.528675   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:29.528730   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:29.528661   54675 retry.go:31] will retry after 190.321192ms: waiting for machine to come up
	I1202 12:39:29.721367   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:29.721982   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:29.722007   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:29.721935   54675 retry.go:31] will retry after 285.613513ms: waiting for machine to come up
	I1202 12:39:30.009566   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:30.010134   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:30.010162   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:30.010081   54675 retry.go:31] will retry after 409.831107ms: waiting for machine to come up
	I1202 12:39:30.421757   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:30.422257   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:30.422287   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:30.422202   54675 retry.go:31] will retry after 433.233223ms: waiting for machine to come up
	I1202 12:39:30.856748   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:30.857302   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:30.857329   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:30.857257   54675 retry.go:31] will retry after 725.626084ms: waiting for machine to come up
	I1202 12:39:31.585103   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:31.585612   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:31.585642   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:31.585571   54675 retry.go:31] will retry after 734.45519ms: waiting for machine to come up
	I1202 12:39:32.321416   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:32.321890   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:32.321917   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:32.321825   54675 retry.go:31] will retry after 851.588452ms: waiting for machine to come up
	I1202 12:39:33.175216   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:33.175653   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:33.175684   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:33.175621   54675 retry.go:31] will retry after 1.031273193s: waiting for machine to come up
	I1202 12:39:34.208591   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:34.209114   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:34.209152   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:34.209070   54675 retry.go:31] will retry after 1.165639018s: waiting for machine to come up
	I1202 12:39:35.375793   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:35.376437   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:35.376468   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:35.376368   54675 retry.go:31] will retry after 2.111714597s: waiting for machine to come up
	I1202 12:39:37.489277   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:37.489731   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:37.489760   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:37.489676   54675 retry.go:31] will retry after 2.641377676s: waiting for machine to come up
	I1202 12:39:40.134548   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:40.135040   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:40.135066   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:40.135011   54675 retry.go:31] will retry after 3.610403102s: waiting for machine to come up
	I1202 12:39:43.747212   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:43.747747   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:43.747774   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:43.747702   54675 retry.go:31] will retry after 4.370042575s: waiting for machine to come up
	I1202 12:39:48.119626   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:48.120130   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:39:48.120160   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:39:48.120060   54675 retry.go:31] will retry after 3.635357198s: waiting for machine to come up
	I1202 12:39:51.759762   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:51.760347   54652 main.go:141] libmachine: (old-k8s-version-666766) Found IP for machine: 192.168.50.171
	I1202 12:39:51.760390   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has current primary IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:51.760397   54652 main.go:141] libmachine: (old-k8s-version-666766) Reserving static IP address...
	I1202 12:39:51.760862   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-666766", mac: "52:54:00:79:ba:14", ip: "192.168.50.171"} in network mk-old-k8s-version-666766
	I1202 12:39:51.833770   54652 main.go:141] libmachine: (old-k8s-version-666766) Reserved static IP address: 192.168.50.171
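The retry loop above is libmachine polling libvirt's DHCP leases until the VM's MAC 52:54:00:79:ba:14 shows up; 192.168.50.171 is then reserved for it. When this "Waiting to get IP" phase times out in other runs, the lease table can be read directly from the host, for example:
  virsh -c qemu:///system net-dhcp-leases mk-old-k8s-version-666766
  virsh -c qemu:///system domifaddr old-k8s-version-666766 --source lease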
	I1202 12:39:51.833802   54652 main.go:141] libmachine: (old-k8s-version-666766) Waiting for SSH to be available...
	I1202 12:39:51.833811   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Getting to WaitForSSH function...
	I1202 12:39:51.836622   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:51.837114   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:minikube Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:51.837153   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:51.837270   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Using SSH client type: external
	I1202 12:39:51.837298   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa (-rw-------)
	I1202 12:39:51.837338   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:39:51.837349   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | About to run SSH command:
	I1202 12:39:51.837361   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | exit 0
	I1202 12:39:51.963991   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | SSH cmd err, output: <nil>: 
	I1202 12:39:51.964379   54652 main.go:141] libmachine: (old-k8s-version-666766) KVM machine creation complete!
	I1202 12:39:51.964703   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetConfigRaw
	I1202 12:39:51.965343   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:51.965546   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:51.965733   54652 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 12:39:51.965752   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetState
	I1202 12:39:51.967204   54652 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 12:39:51.967216   54652 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 12:39:51.967220   54652 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 12:39:51.967226   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:51.969609   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:51.970076   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:51.970105   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:51.970282   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:51.970437   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:51.970555   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:51.970701   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:51.970842   54652 main.go:141] libmachine: Using SSH client type: native
	I1202 12:39:51.971025   54652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:39:51.971041   54652 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 12:39:52.079096   54652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:39:52.079123   54652 main.go:141] libmachine: Detecting the provisioner...
	I1202 12:39:52.079135   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.082875   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.083211   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.083238   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.083484   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:52.083691   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.083835   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.083971   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:52.084106   54652 main.go:141] libmachine: Using SSH client type: native
	I1202 12:39:52.084350   54652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:39:52.084365   54652 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 12:39:52.196902   54652 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 12:39:52.196977   54652 main.go:141] libmachine: found compatible host: buildroot
	I1202 12:39:52.196988   54652 main.go:141] libmachine: Provisioning with buildroot...
	I1202 12:39:52.196995   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:39:52.197236   54652 buildroot.go:166] provisioning hostname "old-k8s-version-666766"
	I1202 12:39:52.197258   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:39:52.197445   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.200042   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.200490   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.200514   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.200673   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:52.200843   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.201016   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.201183   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:52.201379   54652 main.go:141] libmachine: Using SSH client type: native
	I1202 12:39:52.201645   54652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:39:52.201660   54652 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-666766 && echo "old-k8s-version-666766" | sudo tee /etc/hostname
	I1202 12:39:52.328583   54652 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-666766
	
	I1202 12:39:52.328615   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.331202   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.331547   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.331573   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.331722   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:52.331928   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.332095   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.332307   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:52.332482   54652 main.go:141] libmachine: Using SSH client type: native
	I1202 12:39:52.332684   54652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:39:52.332703   54652 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-666766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-666766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-666766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:39:52.455022   54652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
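The /etc/hosts fragment above is idempotent: it rewrites an existing 127.0.1.1 entry when present, appends one otherwise, and does nothing if the hostname is already listed. A lightly reworked standalone sketch of the same logic (HOSTNAME stands in for the machine name used above):

    HOSTNAME=old-k8s-version-666766
    if ! grep -q "[[:space:]]${HOSTNAME}\$" /etc/hosts; then
      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi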
	I1202 12:39:52.455052   54652 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:39:52.455108   54652 buildroot.go:174] setting up certificates
	I1202 12:39:52.455125   54652 provision.go:84] configureAuth start
	I1202 12:39:52.455138   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:39:52.455382   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:39:52.458186   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.458533   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.458563   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.458658   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.460623   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.460909   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.460939   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.461075   54652 provision.go:143] copyHostCerts
	I1202 12:39:52.461139   54652 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:39:52.461149   54652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:39:52.461201   54652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:39:52.461285   54652 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:39:52.461293   54652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:39:52.461312   54652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:39:52.461361   54652 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:39:52.461368   54652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:39:52.461386   54652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:39:52.461435   54652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-666766 san=[127.0.0.1 192.168.50.171 localhost minikube old-k8s-version-666766]
	I1202 12:39:52.579425   54652 provision.go:177] copyRemoteCerts
	I1202 12:39:52.579477   54652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:39:52.579503   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.582113   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.582431   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.582459   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.582623   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:52.582775   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.582889   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:52.583003   54652 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:39:52.666043   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:39:52.689890   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 12:39:52.713186   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:39:52.736031   54652 provision.go:87] duration metric: took 280.894121ms to configureAuth
	I1202 12:39:52.736050   54652 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:39:52.736222   54652 config.go:182] Loaded profile config "old-k8s-version-666766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1202 12:39:52.736326   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.738811   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.739105   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.739141   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.739304   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:52.739500   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.739635   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.739788   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:52.739922   54652 main.go:141] libmachine: Using SSH client type: native
	I1202 12:39:52.740073   54652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:39:52.740086   54652 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:39:52.969263   54652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
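If the insecure-registry drop-in needs to be double-checked after the restart, the file written above can be read back on the guest; a quick sketch (same SSH access as above assumed):

    cat /etc/sysconfig/crio.minikube   # should echo the CRIO_MINIKUBE_OPTIONS line above
    sudo systemctl is-active crio      # "active" once the restart has completed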
	
	I1202 12:39:52.969294   54652 main.go:141] libmachine: Checking connection to Docker...
	I1202 12:39:52.969307   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetURL
	I1202 12:39:52.970593   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | Using libvirt version 6000000
	I1202 12:39:52.972640   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.972945   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.972967   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.973139   54652 main.go:141] libmachine: Docker is up and running!
	I1202 12:39:52.973160   54652 main.go:141] libmachine: Reticulating splines...
	I1202 12:39:52.973168   54652 client.go:171] duration metric: took 25.357606651s to LocalClient.Create
	I1202 12:39:52.973198   54652 start.go:167] duration metric: took 25.35767527s to libmachine.API.Create "old-k8s-version-666766"
	I1202 12:39:52.973212   54652 start.go:293] postStartSetup for "old-k8s-version-666766" (driver="kvm2")
	I1202 12:39:52.973226   54652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:39:52.973249   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:52.973455   54652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:39:52.973477   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:52.975774   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.976095   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:52.976121   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:52.976287   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:52.976489   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:52.976638   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:52.976780   54652 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:39:53.063860   54652 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:39:53.068182   54652 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:39:53.068202   54652 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:39:53.068279   54652 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:39:53.068382   54652 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:39:53.068498   54652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:39:53.079257   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:39:53.104304   54652 start.go:296] duration metric: took 131.080702ms for postStartSetup
	I1202 12:39:53.104340   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetConfigRaw
	I1202 12:39:53.105054   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:39:53.107875   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.108269   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:53.108296   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.108509   54652 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/config.json ...
	I1202 12:39:53.108754   54652 start.go:128] duration metric: took 25.511031351s to createHost
	I1202 12:39:53.108775   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:53.111007   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.111319   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:53.111340   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.111488   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:53.111658   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:53.111788   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:53.111944   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:53.112099   54652 main.go:141] libmachine: Using SSH client type: native
	I1202 12:39:53.112278   54652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:39:53.112290   54652 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:39:53.225012   54652 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733143193.197700519
	
	I1202 12:39:53.225034   54652 fix.go:216] guest clock: 1733143193.197700519
	I1202 12:39:53.225043   54652 fix.go:229] Guest: 2024-12-02 12:39:53.197700519 +0000 UTC Remote: 2024-12-02 12:39:53.108765991 +0000 UTC m=+25.623899571 (delta=88.934528ms)
	I1202 12:39:53.225083   54652 fix.go:200] guest clock delta is within tolerance: 88.934528ms
	I1202 12:39:53.225091   54652 start.go:83] releasing machines lock for "old-k8s-version-666766", held for 25.62747002s
	I1202 12:39:53.225130   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:53.225380   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:39:53.228538   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.228878   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:53.228911   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.229053   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:53.229690   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:53.229890   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:39:53.229998   54652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:39:53.230043   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:53.230125   54652 ssh_runner.go:195] Run: cat /version.json
	I1202 12:39:53.230155   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:39:53.232965   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.233100   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.233291   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:53.233317   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.233598   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:53.233626   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:53.233683   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:53.233809   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:39:53.233820   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:53.233948   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:39:53.233948   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:53.234079   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:39:53.234158   54652 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:39:53.234225   54652 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:39:53.337727   54652 ssh_runner.go:195] Run: systemctl --version
	I1202 12:39:53.344114   54652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:39:53.507092   54652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:39:53.513572   54652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:39:53.513639   54652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:39:53.533057   54652 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:39:53.533076   54652 start.go:495] detecting cgroup driver to use...
	I1202 12:39:53.533129   54652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:39:53.554232   54652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:39:53.569316   54652 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:39:53.569374   54652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:39:53.586265   54652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:39:53.603632   54652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:39:53.741141   54652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:39:53.914516   54652 docker.go:233] disabling docker service ...
	I1202 12:39:53.914585   54652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:39:53.928881   54652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:39:53.941888   54652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:39:54.069606   54652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:39:54.198418   54652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:39:54.212630   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:39:54.231001   54652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1202 12:39:54.231078   54652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:39:54.241533   54652 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:39:54.241578   54652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:39:54.252753   54652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:39:54.263300   54652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:39:54.274150   54652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:39:54.284863   54652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:39:54.294513   54652 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:39:54.294570   54652 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:39:54.307976   54652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:39:54.317803   54652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:39:54.445737   54652 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:39:54.533851   54652 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:39:54.533926   54652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:39:54.539086   54652 start.go:563] Will wait 60s for crictl version
	I1202 12:39:54.539146   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:54.543062   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:39:54.588061   54652 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
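All of the CRI-O edits above land in /etc/crio/crio.conf.d/02-crio.conf, so they can be verified in one pass before kubeadm runs; a short sketch using the paths and keys from the log:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected after the sed edits above:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    lsmod | grep br_netfilter          # loaded by the modprobe step above
    cat /proc/sys/net/ipv4/ip_forward  # set to 1 above
    sudo crictl version                # matches the version output logged above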
	I1202 12:39:54.588145   54652 ssh_runner.go:195] Run: crio --version
	I1202 12:39:54.622815   54652 ssh_runner.go:195] Run: crio --version
	I1202 12:39:54.659352   54652 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1202 12:39:54.660627   54652 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:39:54.664276   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:54.664800   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:39:43 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:39:54.664824   54652 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:39:54.665060   54652 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1202 12:39:54.669821   54652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:39:54.684621   54652 kubeadm.go:883] updating cluster {Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:39:54.684751   54652 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 12:39:54.684816   54652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:39:54.722208   54652 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1202 12:39:54.722266   54652 ssh_runner.go:195] Run: which lz4
	I1202 12:39:54.726966   54652 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:39:54.732866   54652 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:39:54.732908   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1202 12:39:56.410348   54652 crio.go:462] duration metric: took 1.683421021s to copy over tarball
	I1202 12:39:56.410455   54652 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:39:59.031883   54652 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.621380155s)
	I1202 12:39:59.031921   54652 crio.go:469] duration metric: took 2.621536504s to extract the tarball
	I1202 12:39:59.031931   54652 ssh_runner.go:146] rm: /preloaded.tar.lz4
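The preload path above copies a large (≈470 MB) tarball onto the guest and unpacks it into /var; the same steps can be replayed manually when an image-cache problem is suspected (a sketch of the commands the log runs):

    stat -c "%s %y" /preloaded.tar.lz4                       # confirm the tarball landed on the guest
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4                  # same flags as above
    sudo crictl images --output json                         # list what CRI-O sees after extraction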
	I1202 12:39:59.076703   54652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:39:59.129032   54652 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1202 12:39:59.129059   54652 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 12:39:59.129136   54652 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:39:59.129152   54652 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.129176   54652 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.129224   54652 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.129238   54652 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.129195   54652 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.129200   54652 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.129221   54652 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1202 12:39:59.130774   54652 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.130793   54652 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:39:59.130860   54652 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.131037   54652 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.131059   54652 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.131066   54652 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.131059   54652 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1202 12:39:59.131134   54652 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.271913   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.279778   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.294301   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.306952   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.310000   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.320786   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1202 12:39:59.329166   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.367110   54652 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1202 12:39:59.367160   54652 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.367216   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.378927   54652 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1202 12:39:59.378960   54652 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.378998   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.457633   54652 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1202 12:39:59.457675   54652 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.457720   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.458725   54652 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1202 12:39:59.458760   54652 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.458804   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.475597   54652 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1202 12:39:59.475641   54652 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.475697   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.480179   54652 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1202 12:39:59.480219   54652 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1202 12:39:59.480291   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.481200   54652 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1202 12:39:59.481238   54652 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.481242   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.481277   54652 ssh_runner.go:195] Run: which crictl
	I1202 12:39:59.481320   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.481328   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.481386   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.485486   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:39:59.485536   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.611042   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.611162   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.611259   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.618850   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.639143   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.671875   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:39:59.671967   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.763101   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:39:59.763280   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.785465   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:39:59.785712   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:39:59.820994   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:39:59.869106   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:39:59.869252   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:39:59.890165   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1202 12:39:59.960647   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1202 12:39:59.960865   54652 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:39:59.973236   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1202 12:40:00.002477   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1202 12:40:00.002557   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1202 12:40:00.002586   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1202 12:40:00.026658   54652 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1202 12:40:00.092981   54652 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:40:00.234940   54652 cache_images.go:92] duration metric: took 1.105860311s to LoadCachedImages
	W1202 12:40:00.235025   54652 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1202 12:40:00.235041   54652 kubeadm.go:934] updating node { 192.168.50.171 8443 v1.20.0 crio true true} ...
	I1202 12:40:00.235167   54652 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-666766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:40:00.235253   54652 ssh_runner.go:195] Run: crio config
	I1202 12:40:00.290591   54652 cni.go:84] Creating CNI manager for ""
	I1202 12:40:00.290613   54652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:40:00.290624   54652 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:40:00.290641   54652 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-666766 NodeName:old-k8s-version-666766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1202 12:40:00.290756   54652 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-666766"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:40:00.290826   54652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1202 12:40:00.305304   54652 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:40:00.305367   54652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:40:00.317965   54652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1202 12:40:00.338986   54652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:40:00.355916   54652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
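The kubeadm config dumped above has just been copied to /var/tmp/minikube/kubeadm.yaml.new; one way to sanity-check it before the real init is a dry run against the pinned kubeadm binary (a sketch only, using the paths from the log):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run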
	I1202 12:40:00.374970   54652 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1202 12:40:00.378948   54652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:40:00.391734   54652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:40:00.522700   54652 ssh_runner.go:195] Run: sudo systemctl start kubelet
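If the kubelet started here later proves unhealthy, as the wait-control-plane phase below reports, the effective unit, including the 10-kubeadm.conf drop-in copied a few lines above, can be reviewed with standard systemd tooling; a minimal sketch:

    # show the kubelet unit plus its kubeadm drop-in, then its current state
    systemctl cat kubelet
    systemctl status kubelet --no-pager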
	I1202 12:40:00.539827   54652 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766 for IP: 192.168.50.171
	I1202 12:40:00.539852   54652 certs.go:194] generating shared ca certs ...
	I1202 12:40:00.539874   54652 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:00.540040   54652 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:40:00.540103   54652 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:40:00.540121   54652 certs.go:256] generating profile certs ...
	I1202 12:40:00.540190   54652 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.key
	I1202 12:40:00.540208   54652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt with IP's: []
	I1202 12:40:00.879155   54652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt ...
	I1202 12:40:00.879181   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: {Name:mk82c926270b46b7f6f8c2c8e1e2cafd1ad6c9bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:00.879362   54652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.key ...
	I1202 12:40:00.879381   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.key: {Name:mk23b55d227fdf9352bfd589fcfc71fabbf69eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:00.879488   54652 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key.ee2023be
	I1202 12:40:00.879520   54652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt.ee2023be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.171]
	I1202 12:40:01.200134   54652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt.ee2023be ...
	I1202 12:40:01.200175   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt.ee2023be: {Name:mkbd1a84571038b05b9932dbad458041e63b8a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:01.200395   54652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key.ee2023be ...
	I1202 12:40:01.200416   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key.ee2023be: {Name:mkec6f58854d380f421f2d5d0ff080dd657071dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:01.200526   54652 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt.ee2023be -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt
	I1202 12:40:01.200632   54652 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key.ee2023be -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key
	I1202 12:40:01.200707   54652 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.key
	I1202 12:40:01.200727   54652 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.crt with IP's: []
	I1202 12:40:01.400506   54652 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.crt ...
	I1202 12:40:01.400536   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.crt: {Name:mk24e48a2309140170168f0bbee57e8dc9ac9789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:01.400706   54652 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.key ...
	I1202 12:40:01.400722   54652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.key: {Name:mk86a5b23bcf7d8fcd4ba07fa492b75e63f75de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:40:01.400904   54652 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:40:01.400952   54652 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:40:01.400967   54652 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:40:01.400992   54652 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:40:01.401063   54652 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:40:01.401096   54652 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:40:01.401140   54652 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:40:01.401670   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:40:01.436129   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:40:01.469565   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:40:01.506610   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:40:01.540200   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 12:40:01.567432   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 12:40:01.590664   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:40:01.616925   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:40:01.640241   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:40:01.663761   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:40:01.688043   54652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:40:01.711508   54652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:40:01.728499   54652 ssh_runner.go:195] Run: openssl version
	I1202 12:40:01.734638   54652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:40:01.745506   54652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:40:01.749864   54652 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:40:01.749917   54652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:40:01.755699   54652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:40:01.766026   54652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:40:01.776117   54652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:40:01.780722   54652 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:40:01.780770   54652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:40:01.786485   54652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:40:01.797344   54652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:40:01.808205   54652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:40:01.812678   54652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:40:01.812718   54652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:40:01.818776   54652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
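The hashing-and-symlink sequence above follows the standard OpenSSL trust-store layout: each CA certificate is exposed in /etc/ssl/certs under its subject hash with a .0 suffix so TLS clients can locate it. Reproduced by hand it looks roughly like this (illustrative only, mirroring the minikubeCA.pem steps above):

    # link a CA certificate into the OpenSSL cert directory under its subject hash
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0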
	I1202 12:40:01.829438   54652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:40:01.833546   54652 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 12:40:01.833593   54652 kubeadm.go:392] StartCluster: {Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:40:01.833665   54652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:40:01.833713   54652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:40:01.878698   54652 cri.go:89] found id: ""
	I1202 12:40:01.878759   54652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:40:01.889035   54652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:40:01.898561   54652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:40:01.908202   54652 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:40:01.908217   54652 kubeadm.go:157] found existing configuration files:
	
	I1202 12:40:01.908279   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:40:01.917391   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:40:01.917430   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:40:01.926935   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:40:01.936014   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:40:01.936070   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:40:01.945143   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:40:01.954171   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:40:01.954225   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:40:01.963490   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:40:01.972078   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:40:01.972126   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:40:01.981033   54652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:40:02.108511   54652 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:40:02.108590   54652 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:40:02.263339   54652 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:40:02.263526   54652 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:40:02.263684   54652 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:40:02.468446   54652 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:40:02.471211   54652 out.go:235]   - Generating certificates and keys ...
	I1202 12:40:02.471323   54652 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:40:02.471414   54652 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:40:02.613446   54652 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 12:40:02.729809   54652 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 12:40:02.837900   54652 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 12:40:03.226552   54652 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 12:40:03.453349   54652 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 12:40:03.453585   54652 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-666766] and IPs [192.168.50.171 127.0.0.1 ::1]
	I1202 12:40:03.549212   54652 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 12:40:03.549545   54652 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-666766] and IPs [192.168.50.171 127.0.0.1 ::1]
	I1202 12:40:03.664101   54652 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 12:40:03.734878   54652 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 12:40:03.892704   54652 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 12:40:03.892981   54652 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:40:04.032562   54652 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:40:04.130621   54652 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:40:04.538616   54652 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:40:04.802804   54652 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:40:04.819499   54652 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:40:04.820984   54652 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:40:04.821084   54652 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:40:04.950073   54652 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:40:04.952064   54652 out.go:235]   - Booting up control plane ...
	I1202 12:40:04.952217   54652 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:40:04.957203   54652 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:40:04.958114   54652 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:40:04.958932   54652 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:40:04.965474   54652 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:40:44.957783   54652 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:40:44.958940   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:40:44.959204   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:40:49.959595   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:40:49.959838   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:40:59.959134   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:40:59.959378   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:41:19.958783   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:41:19.959000   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:41:59.959915   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:41:59.960223   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:41:59.960294   54652 kubeadm.go:310] 
	I1202 12:41:59.960372   54652 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:41:59.960426   54652 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:41:59.960458   54652 kubeadm.go:310] 
	I1202 12:41:59.960519   54652 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:41:59.960565   54652 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:41:59.960719   54652 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:41:59.960740   54652 kubeadm.go:310] 
	I1202 12:41:59.960891   54652 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:41:59.960942   54652 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:41:59.960984   54652 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:41:59.960994   54652 kubeadm.go:310] 
	I1202 12:41:59.961145   54652 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:41:59.961263   54652 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:41:59.961275   54652 kubeadm.go:310] 
	I1202 12:41:59.961437   54652 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:41:59.961551   54652 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:41:59.961662   54652 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:41:59.961759   54652 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:41:59.961771   54652 kubeadm.go:310] 
	I1202 12:41:59.963100   54652 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:41:59.963239   54652 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:41:59.963335   54652 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1202 12:41:59.963461   54652 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-666766] and IPs [192.168.50.171 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-666766] and IPs [192.168.50.171 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-666766] and IPs [192.168.50.171 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-666766] and IPs [192.168.50.171 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 12:41:59.963502   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:42:00.430846   54652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:42:00.450983   54652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:42:00.464490   54652 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:42:00.464511   54652 kubeadm.go:157] found existing configuration files:
	
	I1202 12:42:00.464567   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:42:00.475037   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:42:00.475091   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:42:00.485141   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:42:00.494344   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:42:00.494400   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:42:00.503891   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:42:00.513204   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:42:00.513257   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:42:00.522652   54652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:42:00.531696   54652 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:42:00.531746   54652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:42:00.541084   54652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:42:00.756695   54652 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:43:57.205241   54652 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:43:57.205375   54652 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:43:57.206828   54652 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:43:57.206895   54652 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:43:57.206987   54652 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:43:57.207113   54652 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:43:57.207211   54652 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:43:57.207273   54652 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:43:57.208909   54652 out.go:235]   - Generating certificates and keys ...
	I1202 12:43:57.208986   54652 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:43:57.209055   54652 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:43:57.209143   54652 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:43:57.209202   54652 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:43:57.209272   54652 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:43:57.209320   54652 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:43:57.209374   54652 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:43:57.209440   54652 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:43:57.209543   54652 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:43:57.209622   54652 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:43:57.209659   54652 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:43:57.209713   54652 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:43:57.209758   54652 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:43:57.209844   54652 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:43:57.209913   54652 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:43:57.209963   54652 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:43:57.210050   54652 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:43:57.210123   54652 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:43:57.210177   54652 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:43:57.210235   54652 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:43:57.212144   54652 out.go:235]   - Booting up control plane ...
	I1202 12:43:57.212265   54652 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:43:57.212345   54652 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:43:57.212409   54652 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:43:57.212509   54652 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:43:57.212682   54652 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:43:57.212760   54652 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:43:57.212851   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:43:57.213052   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:43:57.213150   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:43:57.213341   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:43:57.213417   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:43:57.213622   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:43:57.213690   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:43:57.213841   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:43:57.213901   54652 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:43:57.214063   54652 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:43:57.214077   54652 kubeadm.go:310] 
	I1202 12:43:57.214112   54652 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:43:57.214151   54652 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:43:57.214161   54652 kubeadm.go:310] 
	I1202 12:43:57.214189   54652 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:43:57.214218   54652 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:43:57.214339   54652 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:43:57.214350   54652 kubeadm.go:310] 
	I1202 12:43:57.214473   54652 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:43:57.214522   54652 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:43:57.214576   54652 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:43:57.214586   54652 kubeadm.go:310] 
	I1202 12:43:57.214693   54652 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:43:57.214770   54652 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:43:57.214776   54652 kubeadm.go:310] 
	I1202 12:43:57.214874   54652 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:43:57.214960   54652 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:43:57.215069   54652 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:43:57.215169   54652 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:43:57.215230   54652 kubeadm.go:310] 
	I1202 12:43:57.215249   54652 kubeadm.go:394] duration metric: took 3m55.381650882s to StartCluster
	I1202 12:43:57.215294   54652 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:43:57.215353   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:43:57.259081   54652 cri.go:89] found id: ""
	I1202 12:43:57.259100   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.259110   54652 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:43:57.259118   54652 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:43:57.259189   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:43:57.292441   54652 cri.go:89] found id: ""
	I1202 12:43:57.292460   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.292468   54652 logs.go:284] No container was found matching "etcd"
	I1202 12:43:57.292475   54652 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:43:57.292525   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:43:57.334376   54652 cri.go:89] found id: ""
	I1202 12:43:57.334393   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.334400   54652 logs.go:284] No container was found matching "coredns"
	I1202 12:43:57.334405   54652 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:43:57.334453   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:43:57.369488   54652 cri.go:89] found id: ""
	I1202 12:43:57.369512   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.369521   54652 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:43:57.369528   54652 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:43:57.369588   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:43:57.401389   54652 cri.go:89] found id: ""
	I1202 12:43:57.401412   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.401419   54652 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:43:57.401426   54652 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:43:57.401479   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:43:57.433343   54652 cri.go:89] found id: ""
	I1202 12:43:57.433368   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.433384   54652 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:43:57.433392   54652 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:43:57.433443   54652 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:43:57.465299   54652 cri.go:89] found id: ""
	I1202 12:43:57.465317   54652 logs.go:282] 0 containers: []
	W1202 12:43:57.465325   54652 logs.go:284] No container was found matching "kindnet"
	I1202 12:43:57.465333   54652 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:43:57.465342   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:43:57.583742   54652 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:43:57.583760   54652 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:43:57.583771   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:43:57.692959   54652 logs.go:123] Gathering logs for container status ...
	I1202 12:43:57.692988   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:43:57.730521   54652 logs.go:123] Gathering logs for kubelet ...
	I1202 12:43:57.730549   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:43:57.781536   54652 logs.go:123] Gathering logs for dmesg ...
	I1202 12:43:57.781561   54652 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
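The diagnostics gathered here (kubelet journal, CRI-O journal, container status, dmesg) can also be pulled interactively when reproducing this failure, assuming the profile's VM is still running; for example:

    # inspect kubelet and CRI-O state on the node for this profile
    minikube -p old-k8s-version-666766 ssh "sudo journalctl -u kubelet -n 100 --no-pager"
    minikube -p old-k8s-version-666766 ssh "sudo journalctl -u crio -n 100 --no-pager"
    minikube -p old-k8s-version-666766 ssh "sudo crictl ps -a"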
	W1202 12:43:57.794004   54652 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:43:57.794050   54652 out.go:270] * 
	W1202 12:43:57.794110   54652 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:43:57.794134   54652 out.go:270] * 
	W1202 12:43:57.794938   54652 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:43:57.798175   54652 out.go:201] 
	W1202 12:43:57.799226   54652 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:43:57.799283   54652 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:43:57.799315   54652 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:43:57.800627   54652 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 6 (220.540106ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:43:58.066931   58384 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-666766" does not appear in /home/jenkins/minikube-integration/20033-6257/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-666766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (270.60s)
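The [kubelet-check] lines in the log above show kubeadm polling the kubelet's health endpoint on port 10248 and getting connection refused until the 4m0s wait-control-plane budget runs out. Below is a minimal Go sketch of that kind of probe, for illustration only: the URL and the 4m0s budget are taken from the log, while the function name, retry interval, and everything else are assumptions, not kubeadm's actual implementation.

// probe_kubelet.go - illustrative sketch, not kubeadm code.
// Polls the kubelet /healthz endpoint until it answers 200 OK or the
// deadline passes, mirroring the repeated [kubelet-check] messages above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForKubeletHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is up and healthy
			}
		}
		// While the kubelet is down, err is "connection refused",
		// exactly as reported in the log above.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Short timeout here so the sketch finishes quickly; kubeadm waits 4m0s.
	err := waitForKubeletHealthz("http://localhost:10248/healthz", 15*time.Second)
	if err != nil {
		fmt.Println(err) // analogous to "timed out waiting for the condition"
	}
}

In the failed run the kubelet never came up, so every probe ended in connection refused and the start aborted with K8S_KUBELET_NOT_RUNNING, pointing at the kubelet's own logs (journalctl -xeu kubelet) as the next place to look.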

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-658679 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-658679 --alsologtostderr -v=3: exit status 82 (2m0.505853935s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-658679"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:41:09.841592   55982 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:41:09.841706   55982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:41:09.841715   55982 out.go:358] Setting ErrFile to fd 2...
	I1202 12:41:09.841718   55982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:41:09.841864   55982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:41:09.842093   55982 out.go:352] Setting JSON to false
	I1202 12:41:09.842175   55982 mustload.go:65] Loading cluster: no-preload-658679
	I1202 12:41:09.842532   55982 config.go:182] Loaded profile config "no-preload-658679": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:41:09.842595   55982 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/config.json ...
	I1202 12:41:09.842767   55982 mustload.go:65] Loading cluster: no-preload-658679
	I1202 12:41:09.842863   55982 config.go:182] Loaded profile config "no-preload-658679": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:41:09.842888   55982 stop.go:39] StopHost: no-preload-658679
	I1202 12:41:09.843339   55982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:41:09.843389   55982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:41:09.860954   55982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I1202 12:41:09.861457   55982 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:41:09.861991   55982 main.go:141] libmachine: Using API Version  1
	I1202 12:41:09.862015   55982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:41:09.862380   55982 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:41:09.864683   55982 out.go:177] * Stopping node "no-preload-658679"  ...
	I1202 12:41:09.865726   55982 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 12:41:09.865755   55982 main.go:141] libmachine: (no-preload-658679) Calling .DriverName
	I1202 12:41:09.865942   55982 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 12:41:09.865968   55982 main.go:141] libmachine: (no-preload-658679) Calling .GetSSHHostname
	I1202 12:41:09.868791   55982 main.go:141] libmachine: (no-preload-658679) DBG | domain no-preload-658679 has defined MAC address 52:54:00:b1:82:c2 in network mk-no-preload-658679
	I1202 12:41:09.869151   55982 main.go:141] libmachine: (no-preload-658679) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:82:c2", ip: ""} in network mk-no-preload-658679: {Iface:virbr1 ExpiryTime:2024-12-02 13:40:09 +0000 UTC Type:0 Mac:52:54:00:b1:82:c2 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:no-preload-658679 Clientid:01:52:54:00:b1:82:c2}
	I1202 12:41:09.869184   55982 main.go:141] libmachine: (no-preload-658679) DBG | domain no-preload-658679 has defined IP address 192.168.61.205 and MAC address 52:54:00:b1:82:c2 in network mk-no-preload-658679
	I1202 12:41:09.869323   55982 main.go:141] libmachine: (no-preload-658679) Calling .GetSSHPort
	I1202 12:41:09.869487   55982 main.go:141] libmachine: (no-preload-658679) Calling .GetSSHKeyPath
	I1202 12:41:09.869654   55982 main.go:141] libmachine: (no-preload-658679) Calling .GetSSHUsername
	I1202 12:41:09.869770   55982 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/no-preload-658679/id_rsa Username:docker}
	I1202 12:41:09.973540   55982 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 12:41:10.039953   55982 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 12:41:10.104213   55982 main.go:141] libmachine: Stopping "no-preload-658679"...
	I1202 12:41:10.104268   55982 main.go:141] libmachine: (no-preload-658679) Calling .GetState
	I1202 12:41:10.105941   55982 main.go:141] libmachine: (no-preload-658679) Calling .Stop
	I1202 12:41:10.109575   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 0/120
	I1202 12:41:11.110921   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 1/120
	I1202 12:41:12.112196   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 2/120
	I1202 12:41:13.113852   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 3/120
	I1202 12:41:14.115225   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 4/120
	I1202 12:41:15.117022   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 5/120
	I1202 12:41:16.118264   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 6/120
	I1202 12:41:17.119464   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 7/120
	I1202 12:41:18.120837   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 8/120
	I1202 12:41:19.122552   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 9/120
	I1202 12:41:20.123819   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 10/120
	I1202 12:41:21.125157   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 11/120
	I1202 12:41:22.126520   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 12/120
	I1202 12:41:23.127875   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 13/120
	I1202 12:41:24.129342   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 14/120
	I1202 12:41:25.131304   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 15/120
	I1202 12:41:26.132876   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 16/120
	I1202 12:41:27.134268   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 17/120
	I1202 12:41:28.135763   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 18/120
	I1202 12:41:29.137389   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 19/120
	I1202 12:41:30.139471   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 20/120
	I1202 12:41:31.140890   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 21/120
	I1202 12:41:32.142954   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 22/120
	I1202 12:41:33.144134   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 23/120
	I1202 12:41:34.145446   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 24/120
	I1202 12:41:35.147322   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 25/120
	I1202 12:41:36.149822   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 26/120
	I1202 12:41:37.151255   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 27/120
	I1202 12:41:38.152657   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 28/120
	I1202 12:41:39.154875   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 29/120
	I1202 12:41:40.157007   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 30/120
	I1202 12:41:41.158449   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 31/120
	I1202 12:41:42.159890   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 32/120
	I1202 12:41:43.161670   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 33/120
	I1202 12:41:44.163089   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 34/120
	I1202 12:41:45.165424   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 35/120
	I1202 12:41:46.166636   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 36/120
	I1202 12:41:47.168028   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 37/120
	I1202 12:41:48.170049   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 38/120
	I1202 12:41:49.171529   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 39/120
	I1202 12:41:50.173802   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 40/120
	I1202 12:41:51.175251   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 41/120
	I1202 12:41:52.176480   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 42/120
	I1202 12:41:53.177706   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 43/120
	I1202 12:41:54.178992   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 44/120
	I1202 12:41:55.180852   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 45/120
	I1202 12:41:56.182306   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 46/120
	I1202 12:41:57.183655   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 47/120
	I1202 12:41:58.185138   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 48/120
	I1202 12:41:59.186293   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 49/120
	I1202 12:42:00.188118   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 50/120
	I1202 12:42:01.189674   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 51/120
	I1202 12:42:02.191261   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 52/120
	I1202 12:42:03.192713   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 53/120
	I1202 12:42:04.194543   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 54/120
	I1202 12:42:05.196096   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 55/120
	I1202 12:42:06.197567   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 56/120
	I1202 12:42:07.198937   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 57/120
	I1202 12:42:08.200775   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 58/120
	I1202 12:42:09.203232   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 59/120
	I1202 12:42:10.205385   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 60/120
	I1202 12:42:11.206962   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 61/120
	I1202 12:42:12.208632   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 62/120
	I1202 12:42:13.210749   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 63/120
	I1202 12:42:14.212047   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 64/120
	I1202 12:42:15.213873   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 65/120
	I1202 12:42:16.215533   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 66/120
	I1202 12:42:17.217783   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 67/120
	I1202 12:42:18.219385   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 68/120
	I1202 12:42:19.220870   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 69/120
	I1202 12:42:20.222785   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 70/120
	I1202 12:42:21.224034   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 71/120
	I1202 12:42:22.225408   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 72/120
	I1202 12:42:23.226805   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 73/120
	I1202 12:42:24.228316   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 74/120
	I1202 12:42:25.229713   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 75/120
	I1202 12:42:26.231029   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 76/120
	I1202 12:42:27.232218   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 77/120
	I1202 12:42:28.233436   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 78/120
	I1202 12:42:29.234717   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 79/120
	I1202 12:42:30.236729   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 80/120
	I1202 12:42:31.238001   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 81/120
	I1202 12:42:32.239474   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 82/120
	I1202 12:42:33.240654   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 83/120
	I1202 12:42:34.241947   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 84/120
	I1202 12:42:35.243970   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 85/120
	I1202 12:42:36.245935   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 86/120
	I1202 12:42:37.247557   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 87/120
	I1202 12:42:38.248980   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 88/120
	I1202 12:42:39.250868   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 89/120
	I1202 12:42:40.252655   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 90/120
	I1202 12:42:41.254851   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 91/120
	I1202 12:42:42.256276   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 92/120
	I1202 12:42:43.257558   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 93/120
	I1202 12:42:44.258639   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 94/120
	I1202 12:42:45.260542   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 95/120
	I1202 12:42:46.261844   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 96/120
	I1202 12:42:47.263379   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 97/120
	I1202 12:42:48.264688   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 98/120
	I1202 12:42:49.266708   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 99/120
	I1202 12:42:50.268836   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 100/120
	I1202 12:42:51.270074   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 101/120
	I1202 12:42:52.271217   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 102/120
	I1202 12:42:53.272319   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 103/120
	I1202 12:42:54.273515   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 104/120
	I1202 12:42:55.275173   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 105/120
	I1202 12:42:56.276598   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 106/120
	I1202 12:42:57.278677   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 107/120
	I1202 12:42:58.280084   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 108/120
	I1202 12:42:59.281348   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 109/120
	I1202 12:43:00.283320   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 110/120
	I1202 12:43:01.284678   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 111/120
	I1202 12:43:02.285952   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 112/120
	I1202 12:43:03.287292   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 113/120
	I1202 12:43:04.288438   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 114/120
	I1202 12:43:05.290472   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 115/120
	I1202 12:43:06.291968   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 116/120
	I1202 12:43:07.293359   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 117/120
	I1202 12:43:08.294784   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 118/120
	I1202 12:43:09.296065   55982 main.go:141] libmachine: (no-preload-658679) Waiting for machine to stop 119/120
	I1202 12:43:10.296623   55982 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1202 12:43:10.296671   55982 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1202 12:43:10.298760   55982 out.go:201] 
	W1202 12:43:10.299732   55982 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1202 12:43:10.299743   55982 out.go:270] * 
	W1202 12:43:10.303149   55982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:43:10.304222   55982 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-658679 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679: exit status 3 (18.453080358s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:43:28.756516   57580 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	E1202 12:43:28.756547   57580 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-658679" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.96s)
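The stop failure above follows a simple pattern: minikube backs up /etc/cni and /etc/kubernetes, asks the kvm2 driver to stop the guest, then polls its state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"); when the guest still reports Running, the command gives up with GUEST_STOP_TIMEOUT and exit status 82. Below is a rough Go sketch of that polling shape, for illustration only: the 120 one-second attempts and the final error text come from the log, while the vm interface and fakeVM type are hypothetical stand-ins, not minikube's libmachine API.

// stop_poll.go - illustrative sketch, not minikube/libmachine code.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a hypothetical stand-in for a machine driver.
type vm interface {
	Stop() error
	State() (string, error)
}

func stopWithTimeout(m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if state, err := m.State(); err == nil && state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// After the last attempt the caller sees what the log shows:
	// stop err: unable to stop vm, current state "Running".
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM reports "Stopped" only after a fixed number of polls.
type fakeVM struct{ stopsAfter, polls int }

func (f *fakeVM) Stop() error { return nil }
func (f *fakeVM) State() (string, error) {
	f.polls++
	if f.polls > f.stopsAfter {
		return "Stopped", nil
	}
	return "Running", nil
}

func main() {
	// A guest that stops after 3 polls succeeds; one that never stops,
	// as in this run, exhausts all 120 attempts and times out.
	if err := stopWithTimeout(&fakeVM{stopsAfter: 3}, 120); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}

A guest that never reaches Stopped, as in this run, exhausts all 120 attempts and surfaces the same "Temporary Error: stop: unable to stop vm" message, after which the post-mortem status check fails because the VM is unreachable over SSH.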

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-953044 --alsologtostderr -v=3
E1202 12:42:49.238218   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-953044 --alsologtostderr -v=3: exit status 82 (2m0.487072303s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-953044"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:42:47.519047   57256 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:42:47.519162   57256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:42:47.519170   57256 out.go:358] Setting ErrFile to fd 2...
	I1202 12:42:47.519174   57256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:42:47.519325   57256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:42:47.519524   57256 out.go:352] Setting JSON to false
	I1202 12:42:47.519592   57256 mustload.go:65] Loading cluster: embed-certs-953044
	I1202 12:42:47.519889   57256 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:42:47.519958   57256 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/embed-certs-953044/config.json ...
	I1202 12:42:47.520110   57256 mustload.go:65] Loading cluster: embed-certs-953044
	I1202 12:42:47.520215   57256 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:42:47.520264   57256 stop.go:39] StopHost: embed-certs-953044
	I1202 12:42:47.520691   57256 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:42:47.520737   57256 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:42:47.536086   57256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34831
	I1202 12:42:47.536605   57256 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:42:47.537271   57256 main.go:141] libmachine: Using API Version  1
	I1202 12:42:47.537307   57256 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:42:47.537622   57256 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:42:47.539713   57256 out.go:177] * Stopping node "embed-certs-953044"  ...
	I1202 12:42:47.540923   57256 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 12:42:47.540955   57256 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:42:47.541174   57256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 12:42:47.541205   57256 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:42:47.544250   57256 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:42:47.544710   57256 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:41:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:42:47.544740   57256 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:42:47.544848   57256 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:42:47.545020   57256 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:42:47.545172   57256 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:42:47.545339   57256 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:42:47.637691   57256 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 12:42:47.696489   57256 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 12:42:47.760735   57256 main.go:141] libmachine: Stopping "embed-certs-953044"...
	I1202 12:42:47.760769   57256 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:42:47.762399   57256 main.go:141] libmachine: (embed-certs-953044) Calling .Stop
	I1202 12:42:47.766156   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 0/120
	I1202 12:42:48.767371   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 1/120
	I1202 12:42:49.768711   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 2/120
	I1202 12:42:50.770819   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 3/120
	I1202 12:42:51.772223   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 4/120
	I1202 12:42:52.774180   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 5/120
	I1202 12:42:53.775483   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 6/120
	I1202 12:42:54.776752   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 7/120
	I1202 12:42:55.778214   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 8/120
	I1202 12:42:56.779400   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 9/120
	I1202 12:42:57.781782   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 10/120
	I1202 12:42:58.783214   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 11/120
	I1202 12:42:59.784622   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 12/120
	I1202 12:43:00.786604   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 13/120
	I1202 12:43:01.787825   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 14/120
	I1202 12:43:02.789525   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 15/120
	I1202 12:43:03.791712   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 16/120
	I1202 12:43:04.793138   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 17/120
	I1202 12:43:05.794813   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 18/120
	I1202 12:43:06.796163   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 19/120
	I1202 12:43:07.798391   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 20/120
	I1202 12:43:08.799644   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 21/120
	I1202 12:43:09.801074   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 22/120
	I1202 12:43:10.802456   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 23/120
	I1202 12:43:11.803868   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 24/120
	I1202 12:43:12.805802   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 25/120
	I1202 12:43:13.807227   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 26/120
	I1202 12:43:14.808661   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 27/120
	I1202 12:43:15.810002   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 28/120
	I1202 12:43:16.811462   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 29/120
	I1202 12:43:17.813043   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 30/120
	I1202 12:43:18.814305   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 31/120
	I1202 12:43:19.815603   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 32/120
	I1202 12:43:20.816835   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 33/120
	I1202 12:43:21.818254   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 34/120
	I1202 12:43:22.820200   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 35/120
	I1202 12:43:23.821519   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 36/120
	I1202 12:43:24.823515   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 37/120
	I1202 12:43:25.824924   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 38/120
	I1202 12:43:26.826972   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 39/120
	I1202 12:43:27.828943   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 40/120
	I1202 12:43:28.830886   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 41/120
	I1202 12:43:29.832154   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 42/120
	I1202 12:43:30.833357   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 43/120
	I1202 12:43:31.834464   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 44/120
	I1202 12:43:32.836057   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 45/120
	I1202 12:43:33.837469   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 46/120
	I1202 12:43:34.839023   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 47/120
	I1202 12:43:35.840368   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 48/120
	I1202 12:43:36.841610   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 49/120
	I1202 12:43:37.843618   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 50/120
	I1202 12:43:38.844881   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 51/120
	I1202 12:43:39.846616   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 52/120
	I1202 12:43:40.848122   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 53/120
	I1202 12:43:41.849473   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 54/120
	I1202 12:43:42.851344   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 55/120
	I1202 12:43:43.852588   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 56/120
	I1202 12:43:44.854655   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 57/120
	I1202 12:43:45.855997   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 58/120
	I1202 12:43:46.857499   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 59/120
	I1202 12:43:47.859617   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 60/120
	I1202 12:43:48.861044   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 61/120
	I1202 12:43:49.862355   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 62/120
	I1202 12:43:50.863585   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 63/120
	I1202 12:43:51.864853   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 64/120
	I1202 12:43:52.866695   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 65/120
	I1202 12:43:53.867956   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 66/120
	I1202 12:43:54.869212   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 67/120
	I1202 12:43:55.870426   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 68/120
	I1202 12:43:56.871689   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 69/120
	I1202 12:43:57.873743   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 70/120
	I1202 12:43:58.874966   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 71/120
	I1202 12:43:59.876341   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 72/120
	I1202 12:44:00.877592   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 73/120
	I1202 12:44:01.878842   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 74/120
	I1202 12:44:02.880588   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 75/120
	I1202 12:44:03.881912   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 76/120
	I1202 12:44:04.883305   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 77/120
	I1202 12:44:05.884550   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 78/120
	I1202 12:44:06.885745   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 79/120
	I1202 12:44:07.887809   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 80/120
	I1202 12:44:08.889114   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 81/120
	I1202 12:44:09.890345   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 82/120
	I1202 12:44:10.891519   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 83/120
	I1202 12:44:11.893082   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 84/120
	I1202 12:44:12.895061   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 85/120
	I1202 12:44:13.896607   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 86/120
	I1202 12:44:14.898042   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 87/120
	I1202 12:44:15.899435   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 88/120
	I1202 12:44:16.900770   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 89/120
	I1202 12:44:17.902782   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 90/120
	I1202 12:44:18.904319   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 91/120
	I1202 12:44:19.905762   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 92/120
	I1202 12:44:20.907030   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 93/120
	I1202 12:44:21.908647   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 94/120
	I1202 12:44:22.910483   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 95/120
	I1202 12:44:23.911696   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 96/120
	I1202 12:44:24.913019   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 97/120
	I1202 12:44:25.914217   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 98/120
	I1202 12:44:26.915501   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 99/120
	I1202 12:44:27.917592   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 100/120
	I1202 12:44:28.918902   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 101/120
	I1202 12:44:29.920091   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 102/120
	I1202 12:44:30.921376   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 103/120
	I1202 12:44:31.922695   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 104/120
	I1202 12:44:32.924569   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 105/120
	I1202 12:44:33.925885   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 106/120
	I1202 12:44:34.927105   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 107/120
	I1202 12:44:35.928573   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 108/120
	I1202 12:44:36.930643   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 109/120
	I1202 12:44:37.932737   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 110/120
	I1202 12:44:38.934753   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 111/120
	I1202 12:44:39.936189   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 112/120
	I1202 12:44:40.937515   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 113/120
	I1202 12:44:41.938810   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 114/120
	I1202 12:44:42.940681   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 115/120
	I1202 12:44:43.941959   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 116/120
	I1202 12:44:44.943210   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 117/120
	I1202 12:44:45.944485   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 118/120
	I1202 12:44:46.945695   57256 main.go:141] libmachine: (embed-certs-953044) Waiting for machine to stop 119/120
	I1202 12:44:47.946199   57256 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1202 12:44:47.946258   57256 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1202 12:44:47.948082   57256 out.go:201] 
	W1202 12:44:47.949250   57256 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1202 12:44:47.949267   57256 out.go:270] * 
	W1202 12:44:47.952416   57256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:44:47.953491   57256 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-953044 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044
E1202 12:45:01.369914   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044: exit status 3 (18.593954822s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:45:06.548514   58698 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	E1202 12:45:06.548534   58698 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-953044" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.08s)
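Note on this failure (not part of the recorded run): the stop backed up /etc/cni and /etc/kubernetes, asked libvirt to stop the VM, then polled once per second for all 120 attempts while the domain stayed "Running", and exited with GUEST_STOP_TIMEOUT (exit status 82). A minimal diagnostic sketch follows, assuming shell access to the CI host and its libvirt socket; the domain and profile names are taken from the log above.
	# Checks one might run after a GUEST_STOP_TIMEOUT (illustrative only):
	virsh list --all                                              # is the domain still listed as running?
	virsh domstate embed-certs-953044                             # libvirt's view of the guest state
	minikube stop -p embed-certs-953044 --alsologtostderr -v=7    # retry the stop with more verbose driver logging
	virsh destroy embed-certs-953044                              # last resort: force power-off if the guest ignores the stop request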

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679: exit status 3 (3.167685856s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:43:31.924614   57692 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	E1202 12:43:31.924635   57692 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-658679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-658679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.150557311s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-658679 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679: exit status 3 (3.063291343s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:43:41.140629   57776 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host
	E1202 12:43:41.140650   57776 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.205:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-658679" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
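Note on this failure (not part of the recorded run): the addon never gets a chance to install; every SSH dial to 192.168.61.205:22 returns "no route to host", so both the post-stop status check and the "check paused" step inside `addons enable` fail. That points at the guest being unreachable after the earlier stop timeout rather than at the dashboard addon itself. A minimal reachability sketch, assuming the CI host can reach the libvirt guest network; IP and profile name are taken from the log above.
	# Reachability checks (illustrative only):
	ping -c 3 192.168.61.205                                      # is the guest IP reachable at all?
	nc -vz 192.168.61.205 22                                      # is sshd answering on port 22?
	minikube status -p no-preload-658679 --alsologtostderr        # minikube's own view, with driver logging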

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-666766 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-666766 create -f testdata/busybox.yaml: exit status 1 (41.258773ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-666766" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-666766 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 6 (228.780658ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:43:58.334125   58425 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-666766" does not appear in /home/jenkins/minikube-integration/20033-6257/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-666766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 6 (224.538859ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:43:58.562916   58455 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-666766" does not appear in /home/jenkins/minikube-integration/20033-6257/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-666766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
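Note on this failure (not part of the recorded run): kubectl fails immediately because the "old-k8s-version-666766" context is missing from /home/jenkins/minikube-integration/20033-6257/kubeconfig, so the busybox manifest is never sent to the cluster; the status warning above already suggests `minikube update-context`. A minimal sketch of confirming and repairing the stale context, using the profile name from the log:
	# Context checks (illustrative only):
	kubectl config get-contexts                                   # is old-k8s-version-666766 listed at all?
	minikube update-context -p old-k8s-version-666766             # rewrite the kubeconfig entry for this profile
	kubectl --context old-k8s-version-666766 get nodes            # verify the context resolves before retrying the deploy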

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (103.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-666766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-666766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.018534911s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-666766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-666766 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-666766 describe deploy/metrics-server -n kube-system: exit status 1 (43.27084ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-666766" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-666766 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 6 (225.134386ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:45:41.850325   59028 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-666766" does not appear in /home/jenkins/minikube-integration/20033-6257/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-666766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (103.29s)
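Note on this failure (not part of the recorded run): here the guest is reachable (the addon callback runs kubectl inside the VM), but the apply is refused on localhost:8443, i.e. the API server is not serving, and the host-side kubeconfig context is missing as well. A minimal sketch of checking the control plane from inside the guest, assuming the usual tools (crictl, ss, journalctl) are available on the minikube guest; profile name taken from the log above.
	# In-guest control-plane checks (illustrative only):
	minikube ssh -p old-k8s-version-666766 -- sudo crictl ps -a                    # is kube-apiserver running or crash-looping?
	minikube ssh -p old-k8s-version-666766 -- sudo ss -ltn                         # is anything listening on 8443?
	minikube ssh -p old-k8s-version-666766 -- sudo journalctl -u kubelet --no-pager -n 50   # recent kubelet errors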

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044: exit status 3 (3.167889077s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:45:09.716640   58793 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	E1202 12:45:09.716664   58793 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-953044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-953044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152597809s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-953044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044: exit status 3 (3.06326s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:45:18.932581   58873 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host
	E1202 12:45:18.932600   58873 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.203:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-953044" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
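Note on this failure (not part of the recorded run): same root cause as the no-preload case above; the guest at 192.168.72.203 never becomes reachable over SSH after its stop timed out, so `addons enable dashboard` exits at the pre-flight "check paused" step (MK_ADDON_ENABLE_PAUSED), which runs crictl over SSH before any manifests are applied. Rather than repeating the reachability sketch, one extra libvirt-side check, assuming access to the CI host's libvirt socket; domain name taken from the embed-certs stop log above.
	# Illustrative only:
	virsh domifaddr embed-certs-953044                            # does libvirt still report a DHCP lease for 192.168.72.203?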

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (704.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1202 12:47:49.237891   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:49:12.314174   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m42.913290663s)

                                                
                                                
-- stdout --
	* [old-k8s-version-666766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-666766" primary control-plane node in "old-k8s-version-666766" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-666766" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:45:48.360364   59162 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:45:48.360474   59162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:45:48.360483   59162 out.go:358] Setting ErrFile to fd 2...
	I1202 12:45:48.360487   59162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:45:48.360685   59162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:45:48.361341   59162 out.go:352] Setting JSON to false
	I1202 12:45:48.362308   59162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5300,"bootTime":1733138248,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:45:48.362398   59162 start.go:139] virtualization: kvm guest
	I1202 12:45:48.364378   59162 out.go:177] * [old-k8s-version-666766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:45:48.365465   59162 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:45:48.365530   59162 notify.go:220] Checking for updates...
	I1202 12:45:48.367636   59162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:45:48.368883   59162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:45:48.369982   59162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:45:48.371064   59162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:45:48.372167   59162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:45:48.373552   59162 config.go:182] Loaded profile config "old-k8s-version-666766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1202 12:45:48.373910   59162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:45:48.373972   59162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:45:48.388570   59162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45663
	I1202 12:45:48.388994   59162 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:45:48.389500   59162 main.go:141] libmachine: Using API Version  1
	I1202 12:45:48.389520   59162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:45:48.389858   59162 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:45:48.390059   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:45:48.391503   59162 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1202 12:45:48.392584   59162 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:45:48.392855   59162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:45:48.392885   59162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:45:48.406717   59162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36265
	I1202 12:45:48.406999   59162 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:45:48.407422   59162 main.go:141] libmachine: Using API Version  1
	I1202 12:45:48.407440   59162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:45:48.407726   59162 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:45:48.407890   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:45:48.440408   59162 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:45:48.441478   59162 start.go:297] selected driver: kvm2
	I1202 12:45:48.441488   59162 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:45:48.441593   59162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:45:48.442245   59162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:45:48.442309   59162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:45:48.455830   59162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:45:48.456208   59162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:45:48.456269   59162 cni.go:84] Creating CNI manager for ""
	I1202 12:45:48.456323   59162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:45:48.456366   59162 start.go:340] cluster config:
	{Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:45:48.456471   59162 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:45:48.457977   59162 out.go:177] * Starting "old-k8s-version-666766" primary control-plane node in "old-k8s-version-666766" cluster
	I1202 12:45:48.458935   59162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 12:45:48.458964   59162 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 12:45:48.458973   59162 cache.go:56] Caching tarball of preloaded images
	I1202 12:45:48.459026   59162 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:45:48.459035   59162 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1202 12:45:48.459116   59162 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/config.json ...
	I1202 12:45:48.459318   59162 start.go:360] acquireMachinesLock for old-k8s-version-666766: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:49:02.221184   59162 start.go:364] duration metric: took 3m13.76183246s to acquireMachinesLock for "old-k8s-version-666766"
	I1202 12:49:02.221251   59162 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:49:02.221259   59162 fix.go:54] fixHost starting: 
	I1202 12:49:02.221729   59162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:49:02.221783   59162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:49:02.241304   59162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
	I1202 12:49:02.241758   59162 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:49:02.242225   59162 main.go:141] libmachine: Using API Version  1
	I1202 12:49:02.242248   59162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:49:02.242610   59162 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:49:02.242776   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:02.242886   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetState
	I1202 12:49:02.244559   59162 fix.go:112] recreateIfNeeded on old-k8s-version-666766: state=Stopped err=<nil>
	I1202 12:49:02.244601   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	W1202 12:49:02.244727   59162 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:49:02.246342   59162 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-666766" ...
	I1202 12:49:02.247458   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .Start
	I1202 12:49:02.247609   59162 main.go:141] libmachine: (old-k8s-version-666766) Ensuring networks are active...
	I1202 12:49:02.248326   59162 main.go:141] libmachine: (old-k8s-version-666766) Ensuring network default is active
	I1202 12:49:02.248696   59162 main.go:141] libmachine: (old-k8s-version-666766) Ensuring network mk-old-k8s-version-666766 is active
	I1202 12:49:02.249153   59162 main.go:141] libmachine: (old-k8s-version-666766) Getting domain xml...
	I1202 12:49:02.249827   59162 main.go:141] libmachine: (old-k8s-version-666766) Creating domain...
	I1202 12:49:03.660404   59162 main.go:141] libmachine: (old-k8s-version-666766) Waiting to get IP...
	I1202 12:49:03.661355   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:03.661811   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:03.661898   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:03.661795   60066 retry.go:31] will retry after 223.401147ms: waiting for machine to come up
	I1202 12:49:03.887553   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:03.888209   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:03.888248   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:03.888105   60066 retry.go:31] will retry after 236.830324ms: waiting for machine to come up
	I1202 12:49:04.126745   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:04.127247   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:04.127269   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:04.127200   60066 retry.go:31] will retry after 462.528149ms: waiting for machine to come up
	I1202 12:49:04.592108   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:04.592601   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:04.592632   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:04.592548   60066 retry.go:31] will retry after 537.391905ms: waiting for machine to come up
	I1202 12:49:05.131720   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:05.132094   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:05.132126   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:05.132052   60066 retry.go:31] will retry after 545.114846ms: waiting for machine to come up
	I1202 12:49:05.678576   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:05.678976   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:05.679032   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:05.678948   60066 retry.go:31] will retry after 714.279491ms: waiting for machine to come up
	I1202 12:49:06.394513   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:06.394895   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:06.394914   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:06.394836   60066 retry.go:31] will retry after 1.034186287s: waiting for machine to come up
	I1202 12:49:07.430325   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:07.430899   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:07.430929   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:07.430827   60066 retry.go:31] will retry after 1.225043731s: waiting for machine to come up
	I1202 12:49:08.657768   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:08.658261   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:08.658289   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:08.658223   60066 retry.go:31] will retry after 1.448023603s: waiting for machine to come up
	I1202 12:49:10.107497   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:10.108056   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:10.108087   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:10.107975   60066 retry.go:31] will retry after 2.053475716s: waiting for machine to come up
	I1202 12:49:12.164364   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:12.164897   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:12.164931   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:12.164846   60066 retry.go:31] will retry after 1.923670218s: waiting for machine to come up
	I1202 12:49:14.091550   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:14.092030   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:14.092059   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:14.091990   60066 retry.go:31] will retry after 2.621501325s: waiting for machine to come up
	I1202 12:49:16.715437   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:16.715764   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:16.715794   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:16.715732   60066 retry.go:31] will retry after 3.140035798s: waiting for machine to come up
	I1202 12:49:19.857267   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:19.857678   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | unable to find current IP address of domain old-k8s-version-666766 in network mk-old-k8s-version-666766
	I1202 12:49:19.857701   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | I1202 12:49:19.857625   60066 retry.go:31] will retry after 3.529080732s: waiting for machine to come up
	I1202 12:49:23.388206   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.388835   59162 main.go:141] libmachine: (old-k8s-version-666766) Found IP for machine: 192.168.50.171
	I1202 12:49:23.388879   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has current primary IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.388892   59162 main.go:141] libmachine: (old-k8s-version-666766) Reserving static IP address...
	I1202 12:49:23.389381   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "old-k8s-version-666766", mac: "52:54:00:79:ba:14", ip: "192.168.50.171"} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.389417   59162 main.go:141] libmachine: (old-k8s-version-666766) Reserved static IP address: 192.168.50.171
	I1202 12:49:23.389435   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | skip adding static IP to network mk-old-k8s-version-666766 - found existing host DHCP lease matching {name: "old-k8s-version-666766", mac: "52:54:00:79:ba:14", ip: "192.168.50.171"}
	I1202 12:49:23.389470   59162 main.go:141] libmachine: (old-k8s-version-666766) Waiting for SSH to be available...
	I1202 12:49:23.389490   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | Getting to WaitForSSH function...
	I1202 12:49:23.391644   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.391997   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.392022   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.392176   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | Using SSH client type: external
	I1202 12:49:23.392220   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa (-rw-------)
	I1202 12:49:23.392274   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:49:23.392294   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | About to run SSH command:
	I1202 12:49:23.392308   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | exit 0
	I1202 12:49:23.524918   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | SSH cmd err, output: <nil>: 
	I1202 12:49:23.525363   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetConfigRaw
	I1202 12:49:23.526082   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:49:23.528823   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.529209   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.529238   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.529531   59162 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/config.json ...
	I1202 12:49:23.529745   59162 machine.go:93] provisionDockerMachine start ...
	I1202 12:49:23.529769   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:23.529976   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:23.532181   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.532561   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.532588   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.532724   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:23.532946   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:23.533097   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:23.533222   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:23.533386   59162 main.go:141] libmachine: Using SSH client type: native
	I1202 12:49:23.533613   59162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:49:23.533630   59162 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:49:23.652278   59162 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:49:23.652309   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:49:23.652501   59162 buildroot.go:166] provisioning hostname "old-k8s-version-666766"
	I1202 12:49:23.652525   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:49:23.652692   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:23.655184   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.655538   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.655572   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.655638   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:23.655806   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:23.655948   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:23.656078   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:23.656250   59162 main.go:141] libmachine: Using SSH client type: native
	I1202 12:49:23.656416   59162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:49:23.656428   59162 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-666766 && echo "old-k8s-version-666766" | sudo tee /etc/hostname
	I1202 12:49:23.782791   59162 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-666766
	
	I1202 12:49:23.782819   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:23.785799   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.786189   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.786232   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.786370   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:23.786571   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:23.786747   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:23.786914   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:23.787118   59162 main.go:141] libmachine: Using SSH client type: native
	I1202 12:49:23.787292   59162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:49:23.787309   59162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-666766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-666766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-666766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:49:23.910670   59162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:49:23.910711   59162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:49:23.910754   59162 buildroot.go:174] setting up certificates
	I1202 12:49:23.910769   59162 provision.go:84] configureAuth start
	I1202 12:49:23.910785   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetMachineName
	I1202 12:49:23.911014   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:49:23.913571   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.913939   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.913974   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.914080   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:23.916946   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.917386   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:23.917415   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:23.917581   59162 provision.go:143] copyHostCerts
	I1202 12:49:23.917634   59162 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:49:23.917645   59162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:49:23.917696   59162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:49:23.917821   59162 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:49:23.917831   59162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:49:23.917858   59162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:49:23.917912   59162 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:49:23.917919   59162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:49:23.917935   59162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:49:23.917983   59162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-666766 san=[127.0.0.1 192.168.50.171 localhost minikube old-k8s-version-666766]
	I1202 12:49:24.074358   59162 provision.go:177] copyRemoteCerts
	I1202 12:49:24.074418   59162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:49:24.074441   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:24.077244   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.077602   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.077643   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.077797   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:24.077981   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.078123   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:24.078292   59162 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:49:24.161949   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:49:24.189333   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 12:49:24.213101   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:49:24.236615   59162 provision.go:87] duration metric: took 325.834455ms to configureAuth
	I1202 12:49:24.236634   59162 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:49:24.236817   59162 config.go:182] Loaded profile config "old-k8s-version-666766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1202 12:49:24.236899   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:24.239435   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.239760   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.239792   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.239942   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:24.240141   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.240302   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.240432   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:24.240599   59162 main.go:141] libmachine: Using SSH client type: native
	I1202 12:49:24.240804   59162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:49:24.240827   59162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:49:24.478234   59162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:49:24.478262   59162 machine.go:96] duration metric: took 948.502298ms to provisionDockerMachine
	I1202 12:49:24.478277   59162 start.go:293] postStartSetup for "old-k8s-version-666766" (driver="kvm2")
	I1202 12:49:24.478290   59162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:49:24.478318   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:24.478589   59162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:49:24.478620   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:24.481395   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.481754   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.481780   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.481931   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:24.482100   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.482245   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:24.482363   59162 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:49:24.566482   59162 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:49:24.570464   59162 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:49:24.570492   59162 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:49:24.570558   59162 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:49:24.570640   59162 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:49:24.570728   59162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:49:24.579759   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:49:24.603967   59162 start.go:296] duration metric: took 125.67804ms for postStartSetup
	I1202 12:49:24.604008   59162 fix.go:56] duration metric: took 22.382749465s for fixHost
	I1202 12:49:24.604031   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:24.606630   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.606958   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.606979   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.607156   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:24.607346   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.607575   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.607721   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:24.607860   59162 main.go:141] libmachine: Using SSH client type: native
	I1202 12:49:24.608030   59162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.171 22 <nil> <nil>}
	I1202 12:49:24.608042   59162 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:49:24.720733   59162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733143764.693753848
	
	I1202 12:49:24.720756   59162 fix.go:216] guest clock: 1733143764.693753848
	I1202 12:49:24.720763   59162 fix.go:229] Guest: 2024-12-02 12:49:24.693753848 +0000 UTC Remote: 2024-12-02 12:49:24.604012985 +0000 UTC m=+216.279179036 (delta=89.740863ms)
	I1202 12:49:24.720793   59162 fix.go:200] guest clock delta is within tolerance: 89.740863ms
	I1202 12:49:24.720797   59162 start.go:83] releasing machines lock for "old-k8s-version-666766", held for 22.49957215s
	I1202 12:49:24.720822   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:24.721056   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:49:24.723720   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.724101   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.724131   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.724276   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:24.724764   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:24.724935   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .DriverName
	I1202 12:49:24.725027   59162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:49:24.725062   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:24.725124   59162 ssh_runner.go:195] Run: cat /version.json
	I1202 12:49:24.725146   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHHostname
	I1202 12:49:24.727863   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.728146   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.728180   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.728205   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.728368   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:24.728553   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.728594   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:24.728621   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:24.728718   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:24.728849   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHPort
	I1202 12:49:24.728940   59162 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:49:24.728991   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHKeyPath
	I1202 12:49:24.729123   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetSSHUsername
	I1202 12:49:24.729257   59162 sshutil.go:53] new ssh client: &{IP:192.168.50.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/old-k8s-version-666766/id_rsa Username:docker}
	I1202 12:49:24.808735   59162 ssh_runner.go:195] Run: systemctl --version
	I1202 12:49:24.834462   59162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:49:24.981073   59162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:49:24.989035   59162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:49:24.989100   59162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:49:25.005284   59162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:49:25.005303   59162 start.go:495] detecting cgroup driver to use...
	I1202 12:49:25.005350   59162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:49:25.020892   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:49:25.033786   59162 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:49:25.033830   59162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:49:25.046899   59162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:49:25.061665   59162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:49:25.182143   59162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:49:25.337337   59162 docker.go:233] disabling docker service ...
	I1202 12:49:25.337416   59162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:49:25.351809   59162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:49:25.364530   59162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:49:25.504036   59162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:49:25.632181   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:49:25.647081   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:49:25.667411   59162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1202 12:49:25.667479   59162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:49:25.681441   59162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:49:25.681519   59162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:49:25.691687   59162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:49:25.702870   59162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:49:25.714088   59162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:49:25.725292   59162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:49:25.738529   59162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:49:25.738595   59162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:49:25.753261   59162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:49:25.765484   59162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:49:25.904038   59162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:49:26.032667   59162 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:49:26.032747   59162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:49:26.037918   59162 start.go:563] Will wait 60s for crictl version
	I1202 12:49:26.037971   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:26.042217   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:49:26.085283   59162 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:49:26.085366   59162 ssh_runner.go:195] Run: crio --version
	I1202 12:49:26.116801   59162 ssh_runner.go:195] Run: crio --version
	I1202 12:49:26.148743   59162 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1202 12:49:26.149951   59162 main.go:141] libmachine: (old-k8s-version-666766) Calling .GetIP
	I1202 12:49:26.153305   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:26.153770   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:ba:14", ip: ""} in network mk-old-k8s-version-666766: {Iface:virbr2 ExpiryTime:2024-12-02 13:49:15 +0000 UTC Type:0 Mac:52:54:00:79:ba:14 Iaid: IPaddr:192.168.50.171 Prefix:24 Hostname:old-k8s-version-666766 Clientid:01:52:54:00:79:ba:14}
	I1202 12:49:26.153798   59162 main.go:141] libmachine: (old-k8s-version-666766) DBG | domain old-k8s-version-666766 has defined IP address 192.168.50.171 and MAC address 52:54:00:79:ba:14 in network mk-old-k8s-version-666766
	I1202 12:49:26.154023   59162 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1202 12:49:26.158365   59162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:49:26.170771   59162 kubeadm.go:883] updating cluster {Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:49:26.170878   59162 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 12:49:26.170922   59162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:49:26.225247   59162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1202 12:49:26.225297   59162 ssh_runner.go:195] Run: which lz4
	I1202 12:49:26.229347   59162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:49:26.233473   59162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:49:26.233496   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1202 12:49:27.919465   59162 crio.go:462] duration metric: took 1.690140342s to copy over tarball
	I1202 12:49:27.919545   59162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:49:30.918534   59162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.998953634s)
	I1202 12:49:30.918565   59162 crio.go:469] duration metric: took 2.999069019s to extract the tarball
	I1202 12:49:30.918575   59162 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 12:49:30.961913   59162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:49:30.997617   59162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1202 12:49:30.997643   59162 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 12:49:30.997721   59162 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:49:30.997747   59162 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:30.997764   59162 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:30.997809   59162 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:30.997764   59162 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1202 12:49:30.997841   59162 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:30.997867   59162 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1202 12:49:30.998093   59162 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:30.999546   59162 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:30.999545   59162 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:30.999559   59162 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:30.999566   59162 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:49:30.999563   59162 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:30.999573   59162 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1202 12:49:30.999605   59162 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:30.999649   59162 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1202 12:49:31.156546   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:31.161111   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1202 12:49:31.161497   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:31.163507   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:31.167556   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:31.172889   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:31.193006   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1202 12:49:31.259248   59162 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1202 12:49:31.259314   59162 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:31.259364   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.336680   59162 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1202 12:49:31.336735   59162 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1202 12:49:31.336768   59162 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1202 12:49:31.336782   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.336801   59162 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:31.336809   59162 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1202 12:49:31.336845   59162 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:31.336849   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.336877   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.353260   59162 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1202 12:49:31.353297   59162 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:31.353332   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.353335   59162 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1202 12:49:31.353353   59162 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:31.353382   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.355742   59162 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1202 12:49:31.355771   59162 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1202 12:49:31.355793   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:31.355802   59162 ssh_runner.go:195] Run: which crictl
	I1202 12:49:31.355864   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:31.355899   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:31.355923   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:49:31.363921   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:31.363956   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:31.378164   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:49:31.507702   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:31.509665   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:49:31.509700   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:31.509775   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:31.509789   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:31.517642   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:31.565884   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:49:31.674305   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1202 12:49:31.675833   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1202 12:49:31.675926   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1202 12:49:31.675984   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1202 12:49:31.676032   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1202 12:49:31.676078   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1202 12:49:31.725644   59162 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1202 12:49:31.768169   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1202 12:49:31.845365   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1202 12:49:31.845389   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1202 12:49:31.845518   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1202 12:49:31.849860   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1202 12:49:31.849900   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1202 12:49:31.849931   59162 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1202 12:49:31.984729   59162 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:49:32.125610   59162 cache_images.go:92] duration metric: took 1.127946706s to LoadCachedImages
	W1202 12:49:32.125693   59162 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20033-6257/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I1202 12:49:32.125724   59162 kubeadm.go:934] updating node { 192.168.50.171 8443 v1.20.0 crio true true} ...
	I1202 12:49:32.125840   59162 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-666766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.171
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:49:32.125917   59162 ssh_runner.go:195] Run: crio config
	I1202 12:49:32.178994   59162 cni.go:84] Creating CNI manager for ""
	I1202 12:49:32.179023   59162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:49:32.179034   59162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:49:32.179059   59162 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.171 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-666766 NodeName:old-k8s-version-666766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.171"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.171 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1202 12:49:32.179286   59162 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.171
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-666766"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.171
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.171"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:49:32.179361   59162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1202 12:49:32.190222   59162 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:49:32.190285   59162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:49:32.203516   59162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1202 12:49:32.224050   59162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:49:32.242367   59162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
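The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and only promoted after a diff against the file already in place (see the restart path below). A sketch for inspecting both copies by hand, using the profile name from this log:

  # Compare the staged config with the one currently on the node.
  minikube -p old-k8s-version-666766 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
  minikube -p old-k8s-version-666766 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new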
	I1202 12:49:32.261511   59162 ssh_runner.go:195] Run: grep 192.168.50.171	control-plane.minikube.internal$ /etc/hosts
	I1202 12:49:32.265916   59162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.171	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:49:32.278675   59162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:49:32.417517   59162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:49:32.435640   59162 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766 for IP: 192.168.50.171
	I1202 12:49:32.435666   59162 certs.go:194] generating shared ca certs ...
	I1202 12:49:32.435685   59162 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:49:32.435876   59162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:49:32.435942   59162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:49:32.435954   59162 certs.go:256] generating profile certs ...
	I1202 12:49:32.497161   59162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.key
	I1202 12:49:32.497301   59162 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key.ee2023be
	I1202 12:49:32.497374   59162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.key
	I1202 12:49:32.497553   59162 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:49:32.497608   59162 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:49:32.497622   59162 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:49:32.497658   59162 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:49:32.497691   59162 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:49:32.497724   59162 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:49:32.497788   59162 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:49:32.498745   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:49:32.545055   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:49:32.576366   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:49:32.624246   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:49:32.669700   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 12:49:32.701918   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 12:49:32.732552   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:49:32.759038   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:49:32.786597   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:49:32.814267   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:49:32.840008   59162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:49:32.864126   59162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:49:32.881000   59162 ssh_runner.go:195] Run: openssl version
	I1202 12:49:32.887459   59162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:49:32.898673   59162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:49:32.903747   59162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:49:32.903799   59162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:49:32.910063   59162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:49:32.925155   59162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:49:32.936829   59162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:49:32.941372   59162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:49:32.941422   59162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:49:32.947355   59162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:49:32.958057   59162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:49:32.968981   59162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:49:32.973723   59162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:49:32.973762   59162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:49:32.980009   59162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:49:32.991039   59162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:49:32.995877   59162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:49:33.001954   59162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:49:33.007893   59162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:49:33.014139   59162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:49:33.020015   59162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:49:33.025839   59162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
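The openssl calls above do two things: compute the subject hash that names each /etc/ssl/certs/<hash>.0 symlink, and verify every control-plane certificate is still valid for at least another day. A condensed sketch of both idioms (the b5213941.0 link above comes from exactly this hash):

  # Hash-named CA symlink, as created for minikubeCA.pem above.
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"

  # Non-zero exit if the certificate expires within the next 86400 seconds (24h).
  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt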
	I1202 12:49:33.031692   59162 kubeadm.go:392] StartCluster: {Name:old-k8s-version-666766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-666766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.171 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:49:33.031820   59162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:49:33.031878   59162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:49:33.076584   59162 cri.go:89] found id: ""
	I1202 12:49:33.076659   59162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:49:33.087367   59162 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:49:33.087390   59162 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:49:33.087463   59162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:49:33.097565   59162 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:49:33.099009   59162 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-666766" does not appear in /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:49:33.100046   59162 kubeconfig.go:62] /home/jenkins/minikube-integration/20033-6257/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-666766" cluster setting kubeconfig missing "old-k8s-version-666766" context setting]
	I1202 12:49:33.101549   59162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:49:33.149258   59162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:49:33.164522   59162 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.171
	I1202 12:49:33.164576   59162 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:49:33.164592   59162 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:49:33.164655   59162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:49:33.219490   59162 cri.go:89] found id: ""
	I1202 12:49:33.219569   59162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:49:33.235857   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:49:33.245367   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:49:33.245396   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:49:33.245438   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:49:33.254763   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:49:33.254807   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:49:33.263930   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:49:33.272783   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:49:33.272848   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:49:33.283095   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:49:33.291967   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:49:33.292016   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:49:33.301213   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:49:33.310481   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:49:33.310538   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
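Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails; here all four files are missing, so they are simply cleared before the init phases regenerate them. The same check-and-remove pass, collapsed into a loop:

  # Sketch of the stale-config cleanup performed above.
  ENDPOINT=https://control-plane.minikube.internal:8443
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done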
	I1202 12:49:33.320077   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:49:33.329459   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:49:33.452541   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:49:34.405461   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:49:34.649650   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:49:34.762395   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
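Rather than a full kubeadm init, the restart path re-runs individual init phases against the staged config. The five invocations above, condensed into one loop (binary path and config path taken from this log):

  # $phase is intentionally unquoted so "certs all" expands to two arguments.
  CFG=/var/tmp/minikube/kubeadm.yaml
  BIN=/var/lib/minikube/binaries/v1.20.0
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
  done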
	I1202 12:49:34.857840   59162 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:49:34.857934   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:35.358436   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:35.858633   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:36.359037   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:36.858862   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:37.358416   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:37.858748   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:38.358112   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:38.858499   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:39.358254   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:39.858579   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:40.358104   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:40.857969   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:41.358065   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:41.858023   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:42.358398   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:42.858567   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:43.358640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:43.858575   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:44.358492   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:44.858029   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:45.358949   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:45.858815   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:46.358074   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:46.858861   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:47.358899   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:47.859031   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:48.358457   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:48.858179   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:49.358386   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:49.858957   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:50.358508   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:50.858301   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:51.358855   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:51.858184   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:52.358409   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:52.858439   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:53.358134   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:53.858544   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:54.358799   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:54.858628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:55.358711   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:55.858992   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:56.358236   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:56.858719   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:57.358278   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:57.858663   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:58.358448   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:58.858140   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:59.358283   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:49:59.858473   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:00.358070   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:00.859060   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:01.358980   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:01.858827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:02.358465   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:02.858323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:03.359021   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:03.858058   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:04.358494   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:04.858768   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:05.358095   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:05.857995   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:06.358837   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:06.858884   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:07.358903   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:07.858041   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:08.358753   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:08.858080   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:09.358240   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:09.858015   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:10.358988   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:10.858997   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:11.358682   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:11.858059   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:12.358996   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:12.858247   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:13.358939   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:13.858827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:14.358027   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:14.858022   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:15.358485   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:15.858019   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:16.358841   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:16.858363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:17.358419   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:17.858858   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:18.358583   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:18.858684   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:19.358960   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:19.858628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:20.358844   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:20.858065   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:21.358704   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:21.858570   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:22.358766   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:22.858459   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:23.358829   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:23.858931   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:24.358709   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:24.858701   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:25.358459   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:25.858562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:26.358362   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:26.858094   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:27.358277   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:27.858631   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:28.358794   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:28.858586   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:29.358728   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:29.858268   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:30.358813   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:30.858855   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:31.358288   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:31.858236   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:32.358738   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:32.858331   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:33.358870   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:33.858131   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:34.358077   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
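The block above is the apiserver wait loop: the process is polled roughly every 500ms and, after about a minute with no match (12:49:34 through 12:50:34), minikube gives up and switches to collecting diagnostics. The same wait, sketched as a shell loop with timings inferred from these timestamps:

  # Poll for kube-apiserver every 0.5s with a ~60s deadline.
  deadline=$((SECONDS + 60))
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    [ "$SECONDS" -ge "$deadline" ] && { echo "kube-apiserver never appeared" >&2; break; }
    sleep 0.5
  done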
	I1202 12:50:34.858323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:34.858401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:34.903490   59162 cri.go:89] found id: ""
	I1202 12:50:34.903515   59162 logs.go:282] 0 containers: []
	W1202 12:50:34.903522   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:34.903528   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:34.903593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:34.937787   59162 cri.go:89] found id: ""
	I1202 12:50:34.937812   59162 logs.go:282] 0 containers: []
	W1202 12:50:34.937819   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:34.937825   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:34.937868   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:34.969359   59162 cri.go:89] found id: ""
	I1202 12:50:34.969381   59162 logs.go:282] 0 containers: []
	W1202 12:50:34.969399   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:34.969405   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:34.969452   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:35.001071   59162 cri.go:89] found id: ""
	I1202 12:50:35.001097   59162 logs.go:282] 0 containers: []
	W1202 12:50:35.001106   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:35.001113   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:35.001177   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:35.031599   59162 cri.go:89] found id: ""
	I1202 12:50:35.031619   59162 logs.go:282] 0 containers: []
	W1202 12:50:35.031626   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:35.031632   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:35.031673   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:35.062408   59162 cri.go:89] found id: ""
	I1202 12:50:35.062430   59162 logs.go:282] 0 containers: []
	W1202 12:50:35.062438   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:35.062443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:35.062486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:35.095966   59162 cri.go:89] found id: ""
	I1202 12:50:35.095991   59162 logs.go:282] 0 containers: []
	W1202 12:50:35.096001   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:35.096009   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:35.096066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:35.129923   59162 cri.go:89] found id: ""
	I1202 12:50:35.129949   59162 logs.go:282] 0 containers: []
	W1202 12:50:35.129956   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:35.129964   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:35.129974   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:35.182945   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:35.182973   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:35.197216   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:35.197238   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:35.321313   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:35.321341   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:35.321368   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:35.401152   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:35.401183   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
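With no control-plane containers found, the test falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output; the describe-nodes step fails because nothing is listening on localhost:8443. The same diagnostics, runnable by hand inside the node (commands taken verbatim from this log):

  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
  sudo journalctl -u crio -n 400
  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a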
	I1202 12:50:37.939988   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:37.953594   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:37.953643   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:37.996742   59162 cri.go:89] found id: ""
	I1202 12:50:37.996762   59162 logs.go:282] 0 containers: []
	W1202 12:50:37.996770   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:37.996776   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:37.996822   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:38.029621   59162 cri.go:89] found id: ""
	I1202 12:50:38.029643   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.029651   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:38.029656   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:38.029705   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:38.075201   59162 cri.go:89] found id: ""
	I1202 12:50:38.075224   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.075231   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:38.075237   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:38.075279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:38.107896   59162 cri.go:89] found id: ""
	I1202 12:50:38.107929   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.107940   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:38.107947   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:38.108000   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:38.143011   59162 cri.go:89] found id: ""
	I1202 12:50:38.143034   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.143042   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:38.143047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:38.143098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:38.177872   59162 cri.go:89] found id: ""
	I1202 12:50:38.177902   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.177912   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:38.177919   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:38.177968   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:38.211801   59162 cri.go:89] found id: ""
	I1202 12:50:38.211825   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.211834   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:38.211839   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:38.211887   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:38.266087   59162 cri.go:89] found id: ""
	I1202 12:50:38.266120   59162 logs.go:282] 0 containers: []
	W1202 12:50:38.266131   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:38.266142   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:38.266155   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:38.314513   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:38.314539   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:38.328275   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:38.328309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:38.402288   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:38.402307   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:38.402318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:38.479851   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:38.479881   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:50:41.019619   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:41.032630   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:41.032699   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:41.066461   59162 cri.go:89] found id: ""
	I1202 12:50:41.066484   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.066491   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:41.066496   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:41.066542   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:41.102425   59162 cri.go:89] found id: ""
	I1202 12:50:41.102455   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.102464   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:41.102469   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:41.102531   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:41.134605   59162 cri.go:89] found id: ""
	I1202 12:50:41.134628   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.134635   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:41.134641   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:41.134688   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:41.174030   59162 cri.go:89] found id: ""
	I1202 12:50:41.174055   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.174065   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:41.174073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:41.174137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:41.211962   59162 cri.go:89] found id: ""
	I1202 12:50:41.211983   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.211991   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:41.211996   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:41.212040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:41.244589   59162 cri.go:89] found id: ""
	I1202 12:50:41.244610   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.244618   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:41.244623   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:41.244675   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:41.277092   59162 cri.go:89] found id: ""
	I1202 12:50:41.277114   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.277121   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:41.277127   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:41.277189   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:41.308255   59162 cri.go:89] found id: ""
	I1202 12:50:41.308278   59162 logs.go:282] 0 containers: []
	W1202 12:50:41.308285   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:41.308293   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:41.308303   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:41.321361   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:41.321383   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:41.399066   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:41.399090   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:41.399100   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:41.476028   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:41.476063   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:50:41.515716   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:41.515749   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:44.067598   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:44.080960   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:44.081024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:44.118291   59162 cri.go:89] found id: ""
	I1202 12:50:44.118322   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.118333   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:44.118340   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:44.118399   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:44.152588   59162 cri.go:89] found id: ""
	I1202 12:50:44.152612   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.152619   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:44.152625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:44.152667   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:44.188054   59162 cri.go:89] found id: ""
	I1202 12:50:44.188078   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.188085   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:44.188092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:44.188149   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:44.223345   59162 cri.go:89] found id: ""
	I1202 12:50:44.223371   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.223394   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:44.223400   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:44.223473   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:44.259634   59162 cri.go:89] found id: ""
	I1202 12:50:44.259657   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.259667   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:44.259675   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:44.259730   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:44.298341   59162 cri.go:89] found id: ""
	I1202 12:50:44.298375   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.298383   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:44.298389   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:44.298449   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:44.338934   59162 cri.go:89] found id: ""
	I1202 12:50:44.338960   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.338967   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:44.338973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:44.339078   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:44.376864   59162 cri.go:89] found id: ""
	I1202 12:50:44.376901   59162 logs.go:282] 0 containers: []
	W1202 12:50:44.376913   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:44.376923   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:44.376939   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:44.426163   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:44.426193   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:44.438760   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:44.438783   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:44.514915   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:44.514943   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:44.514957   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:44.591849   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:44.591881   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:50:47.141424   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:47.154996   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:47.155056   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:47.195276   59162 cri.go:89] found id: ""
	I1202 12:50:47.195307   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.195317   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:47.195324   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:47.195379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:47.229896   59162 cri.go:89] found id: ""
	I1202 12:50:47.229924   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.229934   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:47.229941   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:47.230001   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:47.266430   59162 cri.go:89] found id: ""
	I1202 12:50:47.266457   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.266467   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:47.266481   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:47.266541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:47.300133   59162 cri.go:89] found id: ""
	I1202 12:50:47.300164   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.300182   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:47.300189   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:47.300266   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:47.335226   59162 cri.go:89] found id: ""
	I1202 12:50:47.335246   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.335254   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:47.335259   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:47.335311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:47.372068   59162 cri.go:89] found id: ""
	I1202 12:50:47.372088   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.372096   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:47.372101   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:47.372154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:47.409699   59162 cri.go:89] found id: ""
	I1202 12:50:47.409726   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.409737   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:47.409744   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:47.409797   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:47.446860   59162 cri.go:89] found id: ""
	I1202 12:50:47.446880   59162 logs.go:282] 0 containers: []
	W1202 12:50:47.446888   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:47.446895   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:47.446906   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:47.459679   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:47.459702   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:47.537038   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:47.537066   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:47.537083   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:47.619163   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:47.619195   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:50:47.659243   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:47.659274   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
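The block above is one full control-plane probe: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) the test lists CRI containers with `sudo crictl ps -a --quiet --name=<component>` and finds none. A minimal Go sketch of that kind of check, assuming crictl is on the PATH and sudo does not prompt; this is illustrative only, not minikube's actual cri.go implementation:

// containercheck.go: sketch of the per-component container lookup seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches the given component, the same query as `crictl ps -a --quiet --name=X`.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the repeated `No container was found matching "<c>"` warnings above.
			fmt.Printf("no container found matching %q\n", c)
		} else {
			fmt.Printf("%q: %d container(s): %v\n", c, len(ids), ids)
		}
	}
}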
	I1202 12:50:50.212385   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:50.226412   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:50.226471   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:50.265844   59162 cri.go:89] found id: ""
	I1202 12:50:50.265865   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.265872   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:50.265877   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:50.265920   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:50.305140   59162 cri.go:89] found id: ""
	I1202 12:50:50.305178   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.305190   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:50.305197   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:50.305255   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:50.350711   59162 cri.go:89] found id: ""
	I1202 12:50:50.350741   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.350752   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:50.350760   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:50.350828   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:50.395705   59162 cri.go:89] found id: ""
	I1202 12:50:50.395736   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.395748   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:50.395755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:50.395816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:50.445104   59162 cri.go:89] found id: ""
	I1202 12:50:50.445130   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.445140   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:50.445148   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:50.445223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:50.494200   59162 cri.go:89] found id: ""
	I1202 12:50:50.494233   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.494244   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:50.494252   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:50.494306   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:50.534146   59162 cri.go:89] found id: ""
	I1202 12:50:50.534171   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.534181   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:50.534186   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:50.534232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:50.569941   59162 cri.go:89] found id: ""
	I1202 12:50:50.569966   59162 logs.go:282] 0 containers: []
	W1202 12:50:50.569974   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:50.569982   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:50.569993   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:50.623329   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:50.623370   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:50.637179   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:50.637204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:50.704071   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:50.704105   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:50.704119   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:50.788440   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:50.788470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:50:53.330184   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:53.344067   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:53.344120   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:53.381534   59162 cri.go:89] found id: ""
	I1202 12:50:53.381559   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.381569   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:53.381616   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:53.381665   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:53.421395   59162 cri.go:89] found id: ""
	I1202 12:50:53.421422   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.421430   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:53.421435   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:53.421488   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:53.459012   59162 cri.go:89] found id: ""
	I1202 12:50:53.459043   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.459066   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:53.459075   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:53.459170   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:53.502381   59162 cri.go:89] found id: ""
	I1202 12:50:53.502412   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.502423   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:53.502431   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:53.502497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:53.543975   59162 cri.go:89] found id: ""
	I1202 12:50:53.543998   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.544008   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:53.544016   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:53.544077   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:53.586888   59162 cri.go:89] found id: ""
	I1202 12:50:53.586912   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.586921   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:53.586929   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:53.586988   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:53.624926   59162 cri.go:89] found id: ""
	I1202 12:50:53.624955   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.624967   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:53.624972   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:53.625017   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:53.659870   59162 cri.go:89] found id: ""
	I1202 12:50:53.659891   59162 logs.go:282] 0 containers: []
	W1202 12:50:53.659898   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:53.659906   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:53.659918   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:53.710543   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:53.710577   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:53.725254   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:53.725292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:53.797664   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:53.797693   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:53.797709   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:53.875569   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:53.875604   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
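After no containers are found, the report gathers diagnostics in a fixed set of passes: the kubelet and CRI-O journals, filtered dmesg, `kubectl describe nodes` via the bundled v1.20.0 binary, and a container-status listing. A rough local sketch of those same commands, assuming a Linux host with systemd and passwordless sudo (minikube actually runs them on the guest over SSH via ssh_runner, so this is illustrative only):

// gatherlogs.go: sketch of the "Gathering logs for ..." passes shown above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command through `bash -c`, mirroring the Run: lines in the log.
func run(label, cmd string) {
	fmt.Println("==>", label)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// e.g. `kubectl describe nodes` exits with status 1 while the apiserver is down.
		fmt.Println("command failed:", err)
	}
}

func main() {
	run("kubelet", `sudo journalctl -u kubelet -n 400`)
	run("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	run("describe nodes", `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	run("CRI-O", `sudo journalctl -u crio -n 400`)
	run("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}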
	I1202 12:50:56.423967   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:56.447944   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:56.448007   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:56.500317   59162 cri.go:89] found id: ""
	I1202 12:50:56.500346   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.500358   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:56.500365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:56.500433   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:56.533367   59162 cri.go:89] found id: ""
	I1202 12:50:56.533403   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.533415   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:56.533422   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:56.533481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:56.564901   59162 cri.go:89] found id: ""
	I1202 12:50:56.564927   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.564936   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:56.564942   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:56.564987   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:56.597640   59162 cri.go:89] found id: ""
	I1202 12:50:56.597667   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.597677   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:56.597684   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:56.597745   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:56.630203   59162 cri.go:89] found id: ""
	I1202 12:50:56.630231   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.630241   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:56.630248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:56.630306   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:56.665059   59162 cri.go:89] found id: ""
	I1202 12:50:56.665085   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.665094   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:56.665102   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:56.665161   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:56.705930   59162 cri.go:89] found id: ""
	I1202 12:50:56.705961   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.705972   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:56.705980   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:56.706046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:56.739276   59162 cri.go:89] found id: ""
	I1202 12:50:56.739306   59162 logs.go:282] 0 containers: []
	W1202 12:50:56.739317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:56.739328   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:56.739344   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:56.817211   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:56.817241   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:56.817255   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:56.896491   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:56.896525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:50:56.935376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:56.935409   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:56.989190   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:56.989215   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:59.505123   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:50:59.518375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:50:59.518455   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:50:59.553923   59162 cri.go:89] found id: ""
	I1202 12:50:59.553950   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.553958   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:50:59.553964   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:50:59.554017   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:50:59.587666   59162 cri.go:89] found id: ""
	I1202 12:50:59.587696   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.587707   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:50:59.587715   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:50:59.587779   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:50:59.622551   59162 cri.go:89] found id: ""
	I1202 12:50:59.622582   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.622592   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:50:59.622599   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:50:59.622646   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:50:59.660638   59162 cri.go:89] found id: ""
	I1202 12:50:59.660659   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.660665   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:50:59.660671   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:50:59.660717   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:50:59.694197   59162 cri.go:89] found id: ""
	I1202 12:50:59.694219   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.694226   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:50:59.694232   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:50:59.694275   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:50:59.727249   59162 cri.go:89] found id: ""
	I1202 12:50:59.727275   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.727285   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:50:59.727292   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:50:59.727353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:50:59.764262   59162 cri.go:89] found id: ""
	I1202 12:50:59.764287   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.764296   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:50:59.764303   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:50:59.764364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:50:59.796823   59162 cri.go:89] found id: ""
	I1202 12:50:59.796847   59162 logs.go:282] 0 containers: []
	W1202 12:50:59.796855   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:50:59.796862   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:50:59.796877   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:50:59.844825   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:50:59.844849   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:50:59.858613   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:50:59.858636   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:50:59.923953   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:50:59.923979   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:50:59.923993   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:50:59.999744   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:50:59.999774   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:02.542143   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:02.555993   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:02.556065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:02.589957   59162 cri.go:89] found id: ""
	I1202 12:51:02.589985   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.589995   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:02.590002   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:02.590046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:02.625263   59162 cri.go:89] found id: ""
	I1202 12:51:02.625285   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.625293   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:02.625298   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:02.625344   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:02.662075   59162 cri.go:89] found id: ""
	I1202 12:51:02.662096   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.662103   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:02.662109   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:02.662162   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:02.696993   59162 cri.go:89] found id: ""
	I1202 12:51:02.697015   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.697022   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:02.697027   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:02.697073   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:02.729345   59162 cri.go:89] found id: ""
	I1202 12:51:02.729381   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.729393   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:02.729400   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:02.729456   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:02.767690   59162 cri.go:89] found id: ""
	I1202 12:51:02.767714   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.767721   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:02.767727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:02.767786   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:02.802061   59162 cri.go:89] found id: ""
	I1202 12:51:02.802087   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.802094   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:02.802100   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:02.802153   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:02.839420   59162 cri.go:89] found id: ""
	I1202 12:51:02.839448   59162 logs.go:282] 0 containers: []
	W1202 12:51:02.839457   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:02.839466   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:02.839476   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:02.876628   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:02.876655   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:02.926250   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:02.926276   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:02.939371   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:02.939402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:03.008219   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:03.008255   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:03.008269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:05.589974   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:05.602698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:05.602763   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:05.640464   59162 cri.go:89] found id: ""
	I1202 12:51:05.640488   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.640500   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:05.640508   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:05.640562   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:05.679304   59162 cri.go:89] found id: ""
	I1202 12:51:05.679332   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.679340   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:05.679346   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:05.679398   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:05.722425   59162 cri.go:89] found id: ""
	I1202 12:51:05.722449   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.722456   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:05.722461   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:05.722519   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:05.757568   59162 cri.go:89] found id: ""
	I1202 12:51:05.757593   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.757606   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:05.757614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:05.757668   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:05.791931   59162 cri.go:89] found id: ""
	I1202 12:51:05.791952   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.791959   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:05.791965   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:05.792024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:05.822676   59162 cri.go:89] found id: ""
	I1202 12:51:05.822704   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.822712   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:05.822718   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:05.822776   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:05.855744   59162 cri.go:89] found id: ""
	I1202 12:51:05.855765   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.855773   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:05.855778   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:05.855826   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:05.889456   59162 cri.go:89] found id: ""
	I1202 12:51:05.889489   59162 logs.go:282] 0 containers: []
	W1202 12:51:05.889500   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:05.889511   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:05.889525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:05.902415   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:05.902438   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:05.973342   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:05.973364   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:05.973377   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:06.048810   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:06.048839   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:06.088378   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:06.088416   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:08.640951   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:08.656274   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:08.656333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:08.689704   59162 cri.go:89] found id: ""
	I1202 12:51:08.689725   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.689733   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:08.689747   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:08.689812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:08.723732   59162 cri.go:89] found id: ""
	I1202 12:51:08.723760   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.723770   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:08.723777   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:08.723836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:08.759256   59162 cri.go:89] found id: ""
	I1202 12:51:08.759284   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.759292   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:08.759297   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:08.759351   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:08.793783   59162 cri.go:89] found id: ""
	I1202 12:51:08.793812   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.793822   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:08.793830   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:08.793891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:08.827373   59162 cri.go:89] found id: ""
	I1202 12:51:08.827404   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.827414   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:08.827423   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:08.827476   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:08.860432   59162 cri.go:89] found id: ""
	I1202 12:51:08.860458   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.860469   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:08.860478   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:08.860535   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:08.892152   59162 cri.go:89] found id: ""
	I1202 12:51:08.892178   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.892187   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:08.892194   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:08.892269   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:08.924290   59162 cri.go:89] found id: ""
	I1202 12:51:08.924310   59162 logs.go:282] 0 containers: []
	W1202 12:51:08.924322   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:08.924332   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:08.924348   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:08.976001   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:08.976029   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:08.989424   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:08.989452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:09.058099   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:09.058120   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:09.058138   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:09.136659   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:09.136685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:11.679992   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:11.694719   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:11.694787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:11.742805   59162 cri.go:89] found id: ""
	I1202 12:51:11.742832   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.742843   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:11.742850   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:11.742897   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:11.782717   59162 cri.go:89] found id: ""
	I1202 12:51:11.782748   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.782758   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:11.782765   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:11.782820   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:11.817417   59162 cri.go:89] found id: ""
	I1202 12:51:11.817448   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.817460   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:11.817467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:11.817518   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:11.850252   59162 cri.go:89] found id: ""
	I1202 12:51:11.850277   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.850288   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:11.850296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:11.850357   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:11.889498   59162 cri.go:89] found id: ""
	I1202 12:51:11.889529   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.889540   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:11.889547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:11.889605   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:11.922382   59162 cri.go:89] found id: ""
	I1202 12:51:11.922429   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.922440   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:11.922452   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:11.922512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:11.961059   59162 cri.go:89] found id: ""
	I1202 12:51:11.961093   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.961104   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:11.961112   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:11.961172   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:11.996375   59162 cri.go:89] found id: ""
	I1202 12:51:11.996405   59162 logs.go:282] 0 containers: []
	W1202 12:51:11.996416   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:11.996427   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:11.996446   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:12.048601   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:12.048626   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:12.061575   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:12.061599   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:12.132267   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:12.132293   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:12.132307   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:12.209148   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:12.209176   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:14.754316   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:14.767076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:14.767145   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:14.798643   59162 cri.go:89] found id: ""
	I1202 12:51:14.798668   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.798677   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:14.798684   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:14.798740   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:14.830242   59162 cri.go:89] found id: ""
	I1202 12:51:14.830269   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.830278   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:14.830290   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:14.830340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:14.861734   59162 cri.go:89] found id: ""
	I1202 12:51:14.861758   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.861765   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:14.861770   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:14.861826   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:14.899552   59162 cri.go:89] found id: ""
	I1202 12:51:14.899584   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.899594   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:14.899602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:14.899657   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:14.930581   59162 cri.go:89] found id: ""
	I1202 12:51:14.930608   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.930616   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:14.930621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:14.930667   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:14.962875   59162 cri.go:89] found id: ""
	I1202 12:51:14.962904   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.962912   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:14.962917   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:14.962968   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:14.997766   59162 cri.go:89] found id: ""
	I1202 12:51:14.997788   59162 logs.go:282] 0 containers: []
	W1202 12:51:14.997795   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:14.997801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:14.997861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:15.032808   59162 cri.go:89] found id: ""
	I1202 12:51:15.032836   59162 logs.go:282] 0 containers: []
	W1202 12:51:15.032847   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:15.032856   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:15.032870   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:15.108907   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:15.108937   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:15.148602   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:15.148628   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:15.197264   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:15.197301   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:15.211587   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:15.211614   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:15.283214   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:17.783474   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:17.796148   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:17.796199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:17.834534   59162 cri.go:89] found id: ""
	I1202 12:51:17.834559   59162 logs.go:282] 0 containers: []
	W1202 12:51:17.834568   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:17.834576   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:17.834631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:17.872112   59162 cri.go:89] found id: ""
	I1202 12:51:17.872140   59162 logs.go:282] 0 containers: []
	W1202 12:51:17.872151   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:17.872159   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:17.872206   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:17.904114   59162 cri.go:89] found id: ""
	I1202 12:51:17.904147   59162 logs.go:282] 0 containers: []
	W1202 12:51:17.904154   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:17.904174   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:17.904220   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:17.937594   59162 cri.go:89] found id: ""
	I1202 12:51:17.937618   59162 logs.go:282] 0 containers: []
	W1202 12:51:17.937629   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:17.937637   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:17.937703   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:17.969967   59162 cri.go:89] found id: ""
	I1202 12:51:17.969995   59162 logs.go:282] 0 containers: []
	W1202 12:51:17.970005   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:17.970012   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:17.970072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:18.002574   59162 cri.go:89] found id: ""
	I1202 12:51:18.002602   59162 logs.go:282] 0 containers: []
	W1202 12:51:18.002611   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:18.002619   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:18.002674   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:18.037671   59162 cri.go:89] found id: ""
	I1202 12:51:18.037705   59162 logs.go:282] 0 containers: []
	W1202 12:51:18.037718   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:18.037727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:18.037794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:18.072002   59162 cri.go:89] found id: ""
	I1202 12:51:18.072084   59162 logs.go:282] 0 containers: []
	W1202 12:51:18.072097   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:18.072106   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:18.072121   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:18.122680   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:18.122713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:18.136884   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:18.136914   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:18.207374   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:18.207404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:18.207420   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:18.287584   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:18.287616   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:20.828147   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:20.840755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:20.840843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:20.872892   59162 cri.go:89] found id: ""
	I1202 12:51:20.872932   59162 logs.go:282] 0 containers: []
	W1202 12:51:20.872944   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:20.872952   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:20.873004   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:20.906504   59162 cri.go:89] found id: ""
	I1202 12:51:20.906532   59162 logs.go:282] 0 containers: []
	W1202 12:51:20.906542   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:20.906549   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:20.906607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:20.938350   59162 cri.go:89] found id: ""
	I1202 12:51:20.938377   59162 logs.go:282] 0 containers: []
	W1202 12:51:20.938388   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:20.938395   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:20.938446   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:20.969862   59162 cri.go:89] found id: ""
	I1202 12:51:20.969887   59162 logs.go:282] 0 containers: []
	W1202 12:51:20.969898   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:20.969906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:20.969959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:21.003129   59162 cri.go:89] found id: ""
	I1202 12:51:21.003158   59162 logs.go:282] 0 containers: []
	W1202 12:51:21.003168   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:21.003175   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:21.003231   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:21.037844   59162 cri.go:89] found id: ""
	I1202 12:51:21.037867   59162 logs.go:282] 0 containers: []
	W1202 12:51:21.037874   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:21.037880   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:21.037922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:21.071197   59162 cri.go:89] found id: ""
	I1202 12:51:21.071224   59162 logs.go:282] 0 containers: []
	W1202 12:51:21.071234   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:21.071242   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:21.071293   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:21.105200   59162 cri.go:89] found id: ""
	I1202 12:51:21.105226   59162 logs.go:282] 0 containers: []
	W1202 12:51:21.105236   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:21.105246   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:21.105260   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:21.156986   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:21.157013   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:21.170525   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:21.170547   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:21.236893   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:21.236915   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:21.236934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:21.313982   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:21.314019   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
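The block above is one full pass of the health-check loop this run is stuck in: minikube probes for a kube-apiserver process, lists CRI containers for every expected control-plane component, finds none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs, then retries a few seconds later. As an editorial sketch only (not part of the captured output), the same checks can be replayed by hand from a root-capable shell on the node; the commands are copied from the Run: lines above, with quoting added where an interactive shell needs it, and the only assumption is that you can get such a shell (for example via minikube ssh).

	# Editorial sketch: replay the checks minikube runs in the loop above, by hand on the node.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'                  # is any apiserver process running at all?
	sudo crictl ps -a --quiet --name=kube-apiserver               # any apiserver container, even an exited one?
	sudo crictl ps -a --quiet --name=etcd                         # the loop repeats this for each control-plane component
	sudo journalctl -u kubelet -n 400                             # kubelet should explain why the static pods never start
	sudo journalctl -u crio -n 400                                # CRI-O's side of the same window
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel-level problems (OOM, cgroups, ...)
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a               # overall container status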
	I1202 12:51:23.856036   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:23.868548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:23.868608   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:23.903235   59162 cri.go:89] found id: ""
	I1202 12:51:23.903259   59162 logs.go:282] 0 containers: []
	W1202 12:51:23.903266   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:23.903273   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:23.903323   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:23.936917   59162 cri.go:89] found id: ""
	I1202 12:51:23.936944   59162 logs.go:282] 0 containers: []
	W1202 12:51:23.936955   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:23.936963   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:23.937022   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:23.969543   59162 cri.go:89] found id: ""
	I1202 12:51:23.969589   59162 logs.go:282] 0 containers: []
	W1202 12:51:23.969598   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:23.969604   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:23.969658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:24.005536   59162 cri.go:89] found id: ""
	I1202 12:51:24.005560   59162 logs.go:282] 0 containers: []
	W1202 12:51:24.005571   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:24.005579   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:24.005638   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:24.039951   59162 cri.go:89] found id: ""
	I1202 12:51:24.039978   59162 logs.go:282] 0 containers: []
	W1202 12:51:24.039987   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:24.039993   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:24.040039   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:24.073109   59162 cri.go:89] found id: ""
	I1202 12:51:24.073133   59162 logs.go:282] 0 containers: []
	W1202 12:51:24.073141   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:24.073147   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:24.073207   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:24.105915   59162 cri.go:89] found id: ""
	I1202 12:51:24.105937   59162 logs.go:282] 0 containers: []
	W1202 12:51:24.105945   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:24.105950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:24.106010   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:24.137512   59162 cri.go:89] found id: ""
	I1202 12:51:24.137535   59162 logs.go:282] 0 containers: []
	W1202 12:51:24.137542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:24.137553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:24.137566   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:24.194640   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:24.194668   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:24.210110   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:24.210143   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:24.280109   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:24.280131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:24.280145   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:24.357990   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:24.358021   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:26.898156   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:26.911024   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:26.911082   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:26.944105   59162 cri.go:89] found id: ""
	I1202 12:51:26.944134   59162 logs.go:282] 0 containers: []
	W1202 12:51:26.944142   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:26.944148   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:26.944194   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:26.977653   59162 cri.go:89] found id: ""
	I1202 12:51:26.977684   59162 logs.go:282] 0 containers: []
	W1202 12:51:26.977696   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:26.977704   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:26.977760   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:27.009444   59162 cri.go:89] found id: ""
	I1202 12:51:27.009472   59162 logs.go:282] 0 containers: []
	W1202 12:51:27.009489   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:27.009497   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:27.009573   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:27.046386   59162 cri.go:89] found id: ""
	I1202 12:51:27.046408   59162 logs.go:282] 0 containers: []
	W1202 12:51:27.046415   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:27.046420   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:27.046466   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:27.080256   59162 cri.go:89] found id: ""
	I1202 12:51:27.080279   59162 logs.go:282] 0 containers: []
	W1202 12:51:27.080286   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:27.080291   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:27.080344   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:27.113197   59162 cri.go:89] found id: ""
	I1202 12:51:27.113223   59162 logs.go:282] 0 containers: []
	W1202 12:51:27.113234   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:27.113241   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:27.113304   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:27.146479   59162 cri.go:89] found id: ""
	I1202 12:51:27.146505   59162 logs.go:282] 0 containers: []
	W1202 12:51:27.146516   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:27.146522   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:27.146579   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:27.185246   59162 cri.go:89] found id: ""
	I1202 12:51:27.185279   59162 logs.go:282] 0 containers: []
	W1202 12:51:27.185290   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:27.185302   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:27.185321   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:27.235630   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:27.235657   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:27.249805   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:27.249839   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:27.319642   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:27.319674   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:27.319688   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:27.401472   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:27.401498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:29.941429   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:29.953845   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:29.953901   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:29.986985   59162 cri.go:89] found id: ""
	I1202 12:51:29.987016   59162 logs.go:282] 0 containers: []
	W1202 12:51:29.987025   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:29.987031   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:29.987080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:30.020690   59162 cri.go:89] found id: ""
	I1202 12:51:30.020714   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.020725   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:30.020736   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:30.020791   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:30.055987   59162 cri.go:89] found id: ""
	I1202 12:51:30.056013   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.056021   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:30.056026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:30.056074   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:30.090940   59162 cri.go:89] found id: ""
	I1202 12:51:30.090971   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.090983   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:30.090990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:30.091049   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:30.122776   59162 cri.go:89] found id: ""
	I1202 12:51:30.122806   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.122817   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:30.122825   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:30.122888   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:30.158584   59162 cri.go:89] found id: ""
	I1202 12:51:30.158604   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.158611   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:30.158617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:30.158663   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:30.196854   59162 cri.go:89] found id: ""
	I1202 12:51:30.196878   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.196887   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:30.196893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:30.196939   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:30.245656   59162 cri.go:89] found id: ""
	I1202 12:51:30.245681   59162 logs.go:282] 0 containers: []
	W1202 12:51:30.245687   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:30.245695   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:30.245707   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:30.261640   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:30.261673   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:30.331167   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:30.331189   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:30.331200   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:30.409262   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:30.409295   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:30.447704   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:30.447731   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:33.000007   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:33.012842   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:33.012894   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:33.051399   59162 cri.go:89] found id: ""
	I1202 12:51:33.051425   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.051433   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:33.051439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:33.051483   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:33.088059   59162 cri.go:89] found id: ""
	I1202 12:51:33.088093   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.088103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:33.088111   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:33.088166   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:33.122754   59162 cri.go:89] found id: ""
	I1202 12:51:33.122780   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.122790   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:33.122800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:33.122853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:33.155217   59162 cri.go:89] found id: ""
	I1202 12:51:33.155243   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.155253   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:33.155260   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:33.155314   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:33.187862   59162 cri.go:89] found id: ""
	I1202 12:51:33.187883   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.187890   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:33.187900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:33.187957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:33.220913   59162 cri.go:89] found id: ""
	I1202 12:51:33.220939   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.220950   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:33.220957   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:33.221012   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:33.258572   59162 cri.go:89] found id: ""
	I1202 12:51:33.258594   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.258600   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:33.258605   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:33.258647   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:33.293844   59162 cri.go:89] found id: ""
	I1202 12:51:33.293875   59162 logs.go:282] 0 containers: []
	W1202 12:51:33.293886   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:33.293896   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:33.293909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:33.349416   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:33.349447   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:33.365011   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:33.365034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:33.442845   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:33.442876   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:33.442892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:33.523658   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:33.523690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:36.063016   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:36.076505   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:36.076557   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:36.111143   59162 cri.go:89] found id: ""
	I1202 12:51:36.111170   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.111182   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:36.111189   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:36.111246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:36.145475   59162 cri.go:89] found id: ""
	I1202 12:51:36.145499   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.145510   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:36.145517   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:36.145577   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:36.181948   59162 cri.go:89] found id: ""
	I1202 12:51:36.181974   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.181982   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:36.181988   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:36.182032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:36.214793   59162 cri.go:89] found id: ""
	I1202 12:51:36.214815   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.214823   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:36.214830   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:36.214873   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:36.247817   59162 cri.go:89] found id: ""
	I1202 12:51:36.247850   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.247861   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:36.247869   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:36.247932   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:36.285782   59162 cri.go:89] found id: ""
	I1202 12:51:36.285809   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.285818   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:36.285823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:36.285872   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:36.321241   59162 cri.go:89] found id: ""
	I1202 12:51:36.321269   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.321280   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:36.321298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:36.321355   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:36.361821   59162 cri.go:89] found id: ""
	I1202 12:51:36.361854   59162 logs.go:282] 0 containers: []
	W1202 12:51:36.361865   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:36.361877   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:36.361892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:36.436865   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:36.436897   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:36.473455   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:36.473484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:36.524200   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:36.524238   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:36.536759   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:36.536781   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:36.605706   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
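Each retry ends with the same describe-nodes failure: kubectl on the node, pointed at the apiserver through /var/lib/minikube/kubeconfig, is refused on localhost:8443, which matches the empty kube-apiserver container listings above. A hedged sketch of reproducing just this step outside the retry loop, using the exact invocation from the log (the only assumption is shell access to the node):

	# Editorial sketch: the describe-nodes call from the log, run once by hand.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# While no kube-apiserver container exists, this exits 1 and prints:
	#   The connection to the server localhost:8443 was refused - did you specify the right host or port?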
	I1202 12:51:39.106295   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:39.119484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:39.119557   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:39.155989   59162 cri.go:89] found id: ""
	I1202 12:51:39.156011   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.156018   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:39.156023   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:39.156066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:39.192946   59162 cri.go:89] found id: ""
	I1202 12:51:39.192976   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.192987   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:39.192998   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:39.193056   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:39.226579   59162 cri.go:89] found id: ""
	I1202 12:51:39.226599   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.226619   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:39.226625   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:39.226672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:39.263657   59162 cri.go:89] found id: ""
	I1202 12:51:39.263683   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.263693   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:39.263700   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:39.263755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:39.296462   59162 cri.go:89] found id: ""
	I1202 12:51:39.296483   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.296491   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:39.296496   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:39.296542   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:39.332160   59162 cri.go:89] found id: ""
	I1202 12:51:39.332184   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.332191   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:39.332197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:39.332256   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:39.372765   59162 cri.go:89] found id: ""
	I1202 12:51:39.372797   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.372808   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:39.372816   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:39.372879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:39.408701   59162 cri.go:89] found id: ""
	I1202 12:51:39.408722   59162 logs.go:282] 0 containers: []
	W1202 12:51:39.408728   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:39.408737   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:39.408748   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:39.462582   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:39.462611   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:39.475710   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:39.475736   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:39.542639   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:39.542663   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:39.542676   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:39.620644   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:39.620686   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:42.160002   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:42.174994   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:42.175044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:42.207175   59162 cri.go:89] found id: ""
	I1202 12:51:42.207200   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.207211   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:42.207219   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:42.207271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:42.241881   59162 cri.go:89] found id: ""
	I1202 12:51:42.241912   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.241922   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:42.241927   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:42.241974   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:42.276335   59162 cri.go:89] found id: ""
	I1202 12:51:42.276363   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.276373   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:42.276381   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:42.276441   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:42.311349   59162 cri.go:89] found id: ""
	I1202 12:51:42.311387   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.311399   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:42.311407   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:42.311466   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:42.343455   59162 cri.go:89] found id: ""
	I1202 12:51:42.343490   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.343503   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:42.343514   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:42.343574   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:42.385084   59162 cri.go:89] found id: ""
	I1202 12:51:42.385114   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.385125   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:42.385133   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:42.385194   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:42.430592   59162 cri.go:89] found id: ""
	I1202 12:51:42.430620   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.430631   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:42.430638   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:42.430700   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:42.497347   59162 cri.go:89] found id: ""
	I1202 12:51:42.497373   59162 logs.go:282] 0 containers: []
	W1202 12:51:42.497380   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:42.497389   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:42.497402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:42.549370   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:42.549398   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:42.562233   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:42.562257   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:42.628011   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:42.628038   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:42.628055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:42.712743   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:42.712783   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:45.252675   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:45.266283   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:45.266338   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:45.301122   59162 cri.go:89] found id: ""
	I1202 12:51:45.301155   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.301163   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:45.301168   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:45.301213   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:45.333889   59162 cri.go:89] found id: ""
	I1202 12:51:45.333916   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.333924   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:45.333929   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:45.333978   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:45.367606   59162 cri.go:89] found id: ""
	I1202 12:51:45.367631   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.367639   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:45.367645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:45.367692   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:45.400744   59162 cri.go:89] found id: ""
	I1202 12:51:45.400768   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.400776   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:45.400781   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:45.400825   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:45.433224   59162 cri.go:89] found id: ""
	I1202 12:51:45.433246   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.433253   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:45.433258   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:45.433304   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:45.466276   59162 cri.go:89] found id: ""
	I1202 12:51:45.466300   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.466308   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:45.466313   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:45.466407   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:45.501150   59162 cri.go:89] found id: ""
	I1202 12:51:45.501172   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.501179   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:45.501184   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:45.501228   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:45.535095   59162 cri.go:89] found id: ""
	I1202 12:51:45.535116   59162 logs.go:282] 0 containers: []
	W1202 12:51:45.535125   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:45.535137   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:45.535150   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:45.584271   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:45.584301   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:45.598422   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:45.598442   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:45.670821   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:45.670854   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:45.670870   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:45.752839   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:45.752873   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:48.291306   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:48.304635   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:48.304700   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:48.343387   59162 cri.go:89] found id: ""
	I1202 12:51:48.343408   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.343420   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:48.343426   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:48.343470   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:48.377642   59162 cri.go:89] found id: ""
	I1202 12:51:48.377669   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.377680   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:48.377687   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:48.377742   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:48.411288   59162 cri.go:89] found id: ""
	I1202 12:51:48.411310   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.411318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:48.411323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:48.411367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:48.453376   59162 cri.go:89] found id: ""
	I1202 12:51:48.453401   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.453409   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:48.453415   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:48.453466   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:48.485789   59162 cri.go:89] found id: ""
	I1202 12:51:48.485815   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.485825   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:48.485832   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:48.485892   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:48.526074   59162 cri.go:89] found id: ""
	I1202 12:51:48.526103   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.526113   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:48.526128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:48.526178   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:48.560078   59162 cri.go:89] found id: ""
	I1202 12:51:48.560099   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.560106   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:48.560111   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:48.560163   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:48.594296   59162 cri.go:89] found id: ""
	I1202 12:51:48.594323   59162 logs.go:282] 0 containers: []
	W1202 12:51:48.594331   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:48.594339   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:48.594351   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:48.645790   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:48.645816   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:48.659638   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:48.659661   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:48.725282   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:48.725308   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:48.725320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:48.805166   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:48.805192   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:51.343369   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:51.356817   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:51.356880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:51.391162   59162 cri.go:89] found id: ""
	I1202 12:51:51.391187   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.391197   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:51.391208   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:51.391250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:51.424183   59162 cri.go:89] found id: ""
	I1202 12:51:51.424203   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.424210   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:51.424215   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:51.424282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:51.457371   59162 cri.go:89] found id: ""
	I1202 12:51:51.457398   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.457406   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:51.457411   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:51.457468   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:51.494274   59162 cri.go:89] found id: ""
	I1202 12:51:51.494299   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.494307   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:51.494313   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:51.494358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:51.527469   59162 cri.go:89] found id: ""
	I1202 12:51:51.527494   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.527503   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:51.527508   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:51.527570   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:51.560369   59162 cri.go:89] found id: ""
	I1202 12:51:51.560399   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.560407   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:51.560412   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:51.560460   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:51.594916   59162 cri.go:89] found id: ""
	I1202 12:51:51.594942   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.594950   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:51.594956   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:51.595026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:51.630447   59162 cri.go:89] found id: ""
	I1202 12:51:51.630474   59162 logs.go:282] 0 containers: []
	W1202 12:51:51.630486   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:51.630497   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:51.630510   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:51.643611   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:51.643636   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:51.714033   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:51.714053   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:51.714064   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:51.795398   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:51.795423   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:51.832276   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:51.832304   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.384562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:54.397979   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:54.398032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:54.431942   59162 cri.go:89] found id: ""
	I1202 12:51:54.431965   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.431973   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:54.431979   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:54.432024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:54.466033   59162 cri.go:89] found id: ""
	I1202 12:51:54.466054   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.466062   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:54.466067   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:54.466116   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:54.506462   59162 cri.go:89] found id: ""
	I1202 12:51:54.506486   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.506493   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:54.506499   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:54.506545   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:54.539966   59162 cri.go:89] found id: ""
	I1202 12:51:54.539996   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.540006   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:54.540013   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:54.540068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:54.572987   59162 cri.go:89] found id: ""
	I1202 12:51:54.573027   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.573038   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:54.573046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:54.573107   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:54.609495   59162 cri.go:89] found id: ""
	I1202 12:51:54.609528   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.609539   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:54.609547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:54.609593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:54.643109   59162 cri.go:89] found id: ""
	I1202 12:51:54.643136   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.643148   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:54.643205   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:54.643279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:54.681113   59162 cri.go:89] found id: ""
	I1202 12:51:54.681151   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.681160   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:54.681168   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:54.681180   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.734777   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:54.734806   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:54.748171   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:54.748196   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:54.821609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.821628   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:54.821642   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:54.900306   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:54.900339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.438971   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:57.454128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:57.454187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:57.489852   59162 cri.go:89] found id: ""
	I1202 12:51:57.489877   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.489885   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:57.489890   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:57.489938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:57.523496   59162 cri.go:89] found id: ""
	I1202 12:51:57.523515   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.523522   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:57.523528   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:57.523576   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:57.554394   59162 cri.go:89] found id: ""
	I1202 12:51:57.554417   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.554429   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:57.554436   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:57.554497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:57.586259   59162 cri.go:89] found id: ""
	I1202 12:51:57.586281   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.586291   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:57.586298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:57.586353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:57.618406   59162 cri.go:89] found id: ""
	I1202 12:51:57.618427   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.618435   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:57.618440   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:57.618482   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:57.649491   59162 cri.go:89] found id: ""
	I1202 12:51:57.649517   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.649527   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:57.649532   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:57.649575   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:57.682286   59162 cri.go:89] found id: ""
	I1202 12:51:57.682306   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.682313   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:57.682319   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:57.682364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:57.720929   59162 cri.go:89] found id: ""
	I1202 12:51:57.720956   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.720967   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:57.720977   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:57.720987   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:57.802270   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:57.802302   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.841214   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:57.841246   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:57.893691   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:57.893724   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:57.906616   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:57.906640   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:57.973328   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.473500   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:00.487912   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:00.487973   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:00.526513   59162 cri.go:89] found id: ""
	I1202 12:52:00.526539   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.526548   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:00.526557   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:00.526620   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:00.561483   59162 cri.go:89] found id: ""
	I1202 12:52:00.561511   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.561519   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:00.561526   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:00.561583   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:00.592435   59162 cri.go:89] found id: ""
	I1202 12:52:00.592473   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.592484   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:00.592491   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:00.592551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:00.624686   59162 cri.go:89] found id: ""
	I1202 12:52:00.624710   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.624722   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:00.624727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:00.624771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:00.662610   59162 cri.go:89] found id: ""
	I1202 12:52:00.662639   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.662650   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:00.662657   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:00.662721   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:00.695972   59162 cri.go:89] found id: ""
	I1202 12:52:00.695993   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.696000   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:00.696006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:00.696048   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:00.727200   59162 cri.go:89] found id: ""
	I1202 12:52:00.727230   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.727253   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:00.727261   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:00.727316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:00.761510   59162 cri.go:89] found id: ""
	I1202 12:52:00.761536   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.761545   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:00.761556   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:00.761568   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:00.812287   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:00.812318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:00.825282   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:00.825309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:00.894016   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.894042   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:00.894065   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:00.972001   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:00.972034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:03.512982   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:03.528814   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:03.528884   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:03.564137   59162 cri.go:89] found id: ""
	I1202 12:52:03.564159   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.564166   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:03.564173   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:03.564223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:03.608780   59162 cri.go:89] found id: ""
	I1202 12:52:03.608811   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.608822   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:03.608829   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:03.608891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:03.644906   59162 cri.go:89] found id: ""
	I1202 12:52:03.644943   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.644954   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:03.644978   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:03.645052   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:03.676732   59162 cri.go:89] found id: ""
	I1202 12:52:03.676754   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.676761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:03.676767   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:03.676809   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:03.711338   59162 cri.go:89] found id: ""
	I1202 12:52:03.711362   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.711369   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:03.711375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:03.711424   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:03.743657   59162 cri.go:89] found id: ""
	I1202 12:52:03.743682   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.743689   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:03.743694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:03.743737   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:03.777740   59162 cri.go:89] found id: ""
	I1202 12:52:03.777759   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.777766   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:03.777772   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:03.777818   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:03.811145   59162 cri.go:89] found id: ""
	I1202 12:52:03.811169   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.811179   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:03.811190   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:03.811204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:03.862069   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:03.862093   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:03.875133   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:03.875164   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:03.947077   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:03.947102   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:03.947114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:04.023458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:04.023487   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:06.562323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:06.577498   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:06.577556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:06.613937   59162 cri.go:89] found id: ""
	I1202 12:52:06.613962   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.613970   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:06.613976   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:06.614023   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:06.647630   59162 cri.go:89] found id: ""
	I1202 12:52:06.647655   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.647662   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:06.647667   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:06.647711   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:06.683758   59162 cri.go:89] found id: ""
	I1202 12:52:06.683783   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.683793   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:06.683800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:06.683861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:06.722664   59162 cri.go:89] found id: ""
	I1202 12:52:06.722686   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.722694   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:06.722699   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:06.722747   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:06.756255   59162 cri.go:89] found id: ""
	I1202 12:52:06.756280   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.756290   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:06.756296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:06.756340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:06.792350   59162 cri.go:89] found id: ""
	I1202 12:52:06.792376   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.792387   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:06.792394   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:06.792450   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:06.827259   59162 cri.go:89] found id: ""
	I1202 12:52:06.827289   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.827301   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:06.827308   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:06.827367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:06.858775   59162 cri.go:89] found id: ""
	I1202 12:52:06.858795   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.858802   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:06.858811   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:06.858821   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:06.911764   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:06.911795   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:06.925297   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:06.925326   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:06.993703   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:06.993730   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:06.993744   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:07.073657   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:07.073685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.611640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:09.626141   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:09.626199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:09.661406   59162 cri.go:89] found id: ""
	I1202 12:52:09.661425   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.661432   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:09.661439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:09.661498   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:09.698145   59162 cri.go:89] found id: ""
	I1202 12:52:09.698173   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.698184   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:09.698191   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:09.698252   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:09.732150   59162 cri.go:89] found id: ""
	I1202 12:52:09.732178   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.732189   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:09.732197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:09.732261   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:09.768040   59162 cri.go:89] found id: ""
	I1202 12:52:09.768063   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.768070   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:09.768076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:09.768130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:09.801038   59162 cri.go:89] found id: ""
	I1202 12:52:09.801064   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.801075   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:09.801082   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:09.801130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:09.841058   59162 cri.go:89] found id: ""
	I1202 12:52:09.841082   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.841089   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:09.841095   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:09.841137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:09.885521   59162 cri.go:89] found id: ""
	I1202 12:52:09.885541   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.885548   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:09.885554   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:09.885602   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:09.924759   59162 cri.go:89] found id: ""
	I1202 12:52:09.924779   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.924786   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:09.924793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:09.924804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.968241   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:09.968273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:10.020282   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:10.020315   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:10.036491   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:10.036519   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:10.113297   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.113324   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:10.113339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:12.688410   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:12.705296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:12.705356   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:12.743097   59162 cri.go:89] found id: ""
	I1202 12:52:12.743119   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.743127   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:12.743133   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:12.743187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:12.778272   59162 cri.go:89] found id: ""
	I1202 12:52:12.778292   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.778299   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:12.778304   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:12.778365   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:12.816087   59162 cri.go:89] found id: ""
	I1202 12:52:12.816116   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.816127   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:12.816134   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:12.816187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:12.850192   59162 cri.go:89] found id: ""
	I1202 12:52:12.850214   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.850221   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:12.850227   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:12.850282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:12.883325   59162 cri.go:89] found id: ""
	I1202 12:52:12.883351   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.883360   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:12.883367   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:12.883427   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:12.916121   59162 cri.go:89] found id: ""
	I1202 12:52:12.916157   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.916169   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:12.916176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:12.916251   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:12.946704   59162 cri.go:89] found id: ""
	I1202 12:52:12.946733   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.946746   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:12.946753   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:12.946802   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:12.979010   59162 cri.go:89] found id: ""
	I1202 12:52:12.979041   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.979050   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:12.979062   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:12.979075   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:13.062141   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:13.062171   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:13.111866   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:13.111900   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:13.162470   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:13.162498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:13.178497   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:13.178525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:13.245199   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:15.746327   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:15.760092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:15.760160   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:15.797460   59162 cri.go:89] found id: ""
	I1202 12:52:15.797484   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.797495   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:15.797503   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:15.797563   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:15.829969   59162 cri.go:89] found id: ""
	I1202 12:52:15.829998   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.830009   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:15.830017   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:15.830072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:15.862390   59162 cri.go:89] found id: ""
	I1202 12:52:15.862418   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.862428   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:15.862435   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:15.862484   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:15.895223   59162 cri.go:89] found id: ""
	I1202 12:52:15.895244   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.895251   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:15.895257   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:15.895311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:15.933157   59162 cri.go:89] found id: ""
	I1202 12:52:15.933184   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.933192   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:15.933197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:15.933245   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:15.964387   59162 cri.go:89] found id: ""
	I1202 12:52:15.964414   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.964425   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:15.964433   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:15.964487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:15.996803   59162 cri.go:89] found id: ""
	I1202 12:52:15.996825   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.996832   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:15.996837   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:15.996881   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:16.029364   59162 cri.go:89] found id: ""
	I1202 12:52:16.029394   59162 logs.go:282] 0 containers: []
	W1202 12:52:16.029402   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:16.029411   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:16.029422   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:16.098237   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:16.098264   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:16.098278   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:16.172386   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:16.172414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:16.216899   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:16.216923   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:16.281565   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:16.281591   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:18.796337   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:18.809573   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:18.809637   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:18.847965   59162 cri.go:89] found id: ""
	I1202 12:52:18.847991   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.847999   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:18.848004   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:18.848053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:18.883714   59162 cri.go:89] found id: ""
	I1202 12:52:18.883741   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.883751   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:18.883758   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:18.883817   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:18.918581   59162 cri.go:89] found id: ""
	I1202 12:52:18.918605   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.918612   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:18.918617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:18.918672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:18.954394   59162 cri.go:89] found id: ""
	I1202 12:52:18.954426   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.954437   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:18.954443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:18.954502   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:18.995321   59162 cri.go:89] found id: ""
	I1202 12:52:18.995347   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.995355   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:18.995361   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:18.995423   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:19.034030   59162 cri.go:89] found id: ""
	I1202 12:52:19.034055   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.034066   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:19.034073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:19.034130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:19.073569   59162 cri.go:89] found id: ""
	I1202 12:52:19.073597   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.073609   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:19.073615   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:19.073662   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:19.112049   59162 cri.go:89] found id: ""
	I1202 12:52:19.112078   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.112090   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:19.112100   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:19.112113   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:19.180480   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.180502   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:19.180516   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:19.258236   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:19.258264   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:19.299035   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:19.299053   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:19.352572   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:19.352602   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:21.866524   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:21.879286   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:21.879340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:21.910463   59162 cri.go:89] found id: ""
	I1202 12:52:21.910489   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.910498   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:21.910504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:21.910551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:21.943130   59162 cri.go:89] found id: ""
	I1202 12:52:21.943157   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.943165   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:21.943171   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:21.943216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:21.976969   59162 cri.go:89] found id: ""
	I1202 12:52:21.976990   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.976997   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:21.977002   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:21.977055   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:22.022113   59162 cri.go:89] found id: ""
	I1202 12:52:22.022144   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.022153   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:22.022159   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:22.022218   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:22.057387   59162 cri.go:89] found id: ""
	I1202 12:52:22.057406   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.057413   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:22.057418   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:22.057459   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:22.089832   59162 cri.go:89] found id: ""
	I1202 12:52:22.089866   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.089892   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:22.089900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:22.089960   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:22.121703   59162 cri.go:89] found id: ""
	I1202 12:52:22.121727   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.121735   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:22.121740   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:22.121789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:22.155076   59162 cri.go:89] found id: ""
	I1202 12:52:22.155098   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.155108   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:22.155117   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:22.155137   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:22.234831   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:22.234862   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:22.273912   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:22.273945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:22.327932   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:22.327966   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:22.340890   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:22.340913   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:22.419371   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:24.919868   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:24.935004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:24.935068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:24.972438   59162 cri.go:89] found id: ""
	I1202 12:52:24.972466   59162 logs.go:282] 0 containers: []
	W1202 12:52:24.972474   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:24.972480   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:24.972525   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:25.009282   59162 cri.go:89] found id: ""
	I1202 12:52:25.009310   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.009320   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:25.009329   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:25.009391   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:25.043227   59162 cri.go:89] found id: ""
	I1202 12:52:25.043254   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.043262   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:25.043267   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:25.043318   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:25.079167   59162 cri.go:89] found id: ""
	I1202 12:52:25.079191   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.079198   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:25.079204   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:25.079263   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:25.110308   59162 cri.go:89] found id: ""
	I1202 12:52:25.110332   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.110340   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:25.110346   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:25.110388   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:25.143804   59162 cri.go:89] found id: ""
	I1202 12:52:25.143830   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.143840   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:25.143846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:25.143903   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:25.178114   59162 cri.go:89] found id: ""
	I1202 12:52:25.178140   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.178147   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:25.178155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:25.178204   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:25.212632   59162 cri.go:89] found id: ""
	I1202 12:52:25.212665   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.212675   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:25.212684   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:25.212696   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:25.267733   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:25.267761   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:25.281025   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:25.281048   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:25.346497   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:25.346520   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:25.346531   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:25.437435   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:25.437469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:27.979493   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:27.993542   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:27.993615   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:28.030681   59162 cri.go:89] found id: ""
	I1202 12:52:28.030705   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.030712   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:28.030718   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:28.030771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:28.063991   59162 cri.go:89] found id: ""
	I1202 12:52:28.064019   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.064027   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:28.064032   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:28.064080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:28.097983   59162 cri.go:89] found id: ""
	I1202 12:52:28.098018   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.098029   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:28.098038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:28.098098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:28.131956   59162 cri.go:89] found id: ""
	I1202 12:52:28.131977   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.131987   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:28.131995   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:28.132071   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:28.170124   59162 cri.go:89] found id: ""
	I1202 12:52:28.170160   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.170171   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:28.170177   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:28.170238   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:28.203127   59162 cri.go:89] found id: ""
	I1202 12:52:28.203149   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.203157   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:28.203163   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:28.203216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:28.240056   59162 cri.go:89] found id: ""
	I1202 12:52:28.240081   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.240088   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:28.240094   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:28.240142   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:28.276673   59162 cri.go:89] found id: ""
	I1202 12:52:28.276699   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.276710   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:28.276720   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:28.276733   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:28.333435   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:28.333470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:28.347465   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:28.347491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:28.432745   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:28.432777   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:28.432792   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:28.515984   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:28.516017   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.057069   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:31.070021   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:31.070084   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:31.106501   59162 cri.go:89] found id: ""
	I1202 12:52:31.106530   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.106540   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:31.106547   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:31.106606   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:31.141190   59162 cri.go:89] found id: ""
	I1202 12:52:31.141219   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.141230   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:31.141238   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:31.141298   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:31.176050   59162 cri.go:89] found id: ""
	I1202 12:52:31.176077   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.176087   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:31.176099   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:31.176169   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:31.211740   59162 cri.go:89] found id: ""
	I1202 12:52:31.211769   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.211780   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:31.211786   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:31.211831   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:31.248949   59162 cri.go:89] found id: ""
	I1202 12:52:31.248974   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.248983   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:31.248990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:31.249044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:31.284687   59162 cri.go:89] found id: ""
	I1202 12:52:31.284709   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.284717   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:31.284723   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:31.284765   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:31.317972   59162 cri.go:89] found id: ""
	I1202 12:52:31.317997   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.318004   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:31.318010   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:31.318065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:31.354866   59162 cri.go:89] found id: ""
	I1202 12:52:31.354893   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.354904   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:31.354914   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:31.354927   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:31.425168   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:31.425191   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:31.425202   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:31.508169   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:31.508204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.547193   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:31.547220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:31.601864   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:31.601892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.115652   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:34.131644   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:34.131695   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:34.174473   59162 cri.go:89] found id: ""
	I1202 12:52:34.174500   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.174510   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:34.174518   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:34.174571   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:34.226162   59162 cri.go:89] found id: ""
	I1202 12:52:34.226190   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.226201   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:34.226208   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:34.226271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:34.269202   59162 cri.go:89] found id: ""
	I1202 12:52:34.269230   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.269240   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:34.269248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:34.269327   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:34.304571   59162 cri.go:89] found id: ""
	I1202 12:52:34.304604   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.304615   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:34.304621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:34.304670   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:34.339285   59162 cri.go:89] found id: ""
	I1202 12:52:34.339316   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.339327   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:34.339334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:34.339401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:34.374919   59162 cri.go:89] found id: ""
	I1202 12:52:34.374952   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.374964   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:34.374973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:34.375035   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:34.409292   59162 cri.go:89] found id: ""
	I1202 12:52:34.409319   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.409330   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:34.409337   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:34.409404   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:34.442536   59162 cri.go:89] found id: ""
	I1202 12:52:34.442561   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.442568   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:34.442576   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:34.442587   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:34.494551   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:34.494582   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.508684   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:34.508713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:34.572790   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:34.572816   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:34.572835   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:34.649327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:34.649358   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:37.190648   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:37.203913   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:37.203966   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:37.243165   59162 cri.go:89] found id: ""
	I1202 12:52:37.243186   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.243194   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:37.243199   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:37.243246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:37.279317   59162 cri.go:89] found id: ""
	I1202 12:52:37.279343   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.279351   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:37.279356   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:37.279411   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:37.312655   59162 cri.go:89] found id: ""
	I1202 12:52:37.312684   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.312693   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:37.312702   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:37.312748   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:37.346291   59162 cri.go:89] found id: ""
	I1202 12:52:37.346319   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.346328   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:37.346334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:37.346382   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:37.381534   59162 cri.go:89] found id: ""
	I1202 12:52:37.381555   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.381563   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:37.381569   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:37.381621   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:37.416990   59162 cri.go:89] found id: ""
	I1202 12:52:37.417013   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.417020   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:37.417026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:37.417083   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:37.451149   59162 cri.go:89] found id: ""
	I1202 12:52:37.451174   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.451182   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:37.451187   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:37.451233   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:37.485902   59162 cri.go:89] found id: ""
	I1202 12:52:37.485929   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.485940   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:37.485950   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:37.485970   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:37.541615   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:37.541645   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:37.554846   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:37.554866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:37.622432   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:37.622457   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:37.622471   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:37.708793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:37.708832   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:40.246822   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:40.260893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:40.260959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:40.294743   59162 cri.go:89] found id: ""
	I1202 12:52:40.294773   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.294782   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:40.294789   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:40.294845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:40.338523   59162 cri.go:89] found id: ""
	I1202 12:52:40.338557   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.338570   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:40.338577   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:40.338628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:40.373134   59162 cri.go:89] found id: ""
	I1202 12:52:40.373162   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.373170   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:40.373176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:40.373225   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:40.410197   59162 cri.go:89] found id: ""
	I1202 12:52:40.410233   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.410247   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:40.410256   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:40.410333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:40.442497   59162 cri.go:89] found id: ""
	I1202 12:52:40.442521   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.442530   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:40.442536   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:40.442597   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:40.477835   59162 cri.go:89] found id: ""
	I1202 12:52:40.477863   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.477872   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:40.477879   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:40.477936   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:40.511523   59162 cri.go:89] found id: ""
	I1202 12:52:40.511547   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.511559   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:40.511567   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:40.511628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:40.545902   59162 cri.go:89] found id: ""
	I1202 12:52:40.545928   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.545942   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:40.545962   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:40.545976   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:40.595638   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:40.595669   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:40.609023   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:40.609043   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:40.680826   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:40.680848   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:40.680866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:40.756551   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:40.756579   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:43.295761   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:43.308764   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:43.308836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:43.343229   59162 cri.go:89] found id: ""
	I1202 12:52:43.343258   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.343268   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:43.343276   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:43.343335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:43.376841   59162 cri.go:89] found id: ""
	I1202 12:52:43.376861   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.376868   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:43.376874   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:43.376918   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:43.415013   59162 cri.go:89] found id: ""
	I1202 12:52:43.415033   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.415041   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:43.415046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:43.415094   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:43.451563   59162 cri.go:89] found id: ""
	I1202 12:52:43.451590   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.451601   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:43.451608   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:43.451658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:43.492838   59162 cri.go:89] found id: ""
	I1202 12:52:43.492859   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.492867   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:43.492872   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:43.492934   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:43.531872   59162 cri.go:89] found id: ""
	I1202 12:52:43.531898   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.531908   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:43.531914   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:43.531957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:43.566235   59162 cri.go:89] found id: ""
	I1202 12:52:43.566260   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.566270   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:43.566277   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:43.566332   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:43.601502   59162 cri.go:89] found id: ""
	I1202 12:52:43.601531   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.601542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:43.601553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:43.601567   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:43.650984   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:43.651012   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:43.664273   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:43.664296   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:43.735791   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:43.735819   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:43.735833   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:43.817824   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:43.817861   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.356130   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:46.368755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:46.368835   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:46.404552   59162 cri.go:89] found id: ""
	I1202 12:52:46.404574   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.404582   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:46.404588   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:46.404640   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:46.438292   59162 cri.go:89] found id: ""
	I1202 12:52:46.438318   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.438329   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:46.438337   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:46.438397   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:46.471614   59162 cri.go:89] found id: ""
	I1202 12:52:46.471636   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.471643   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:46.471649   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:46.471752   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:46.502171   59162 cri.go:89] found id: ""
	I1202 12:52:46.502193   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.502201   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:46.502207   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:46.502250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:46.533820   59162 cri.go:89] found id: ""
	I1202 12:52:46.533842   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.533851   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:46.533859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:46.533914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:46.566891   59162 cri.go:89] found id: ""
	I1202 12:52:46.566918   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.566928   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:46.566936   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:46.566980   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:46.599112   59162 cri.go:89] found id: ""
	I1202 12:52:46.599143   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.599154   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:46.599161   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:46.599215   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:46.630794   59162 cri.go:89] found id: ""
	I1202 12:52:46.630837   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.630849   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:46.630860   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:46.630876   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:46.644180   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:46.644210   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:46.705881   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:46.705921   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:46.705936   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:46.781327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:46.781359   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.820042   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:46.820072   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:49.368930   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:49.381506   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:49.381556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:49.417928   59162 cri.go:89] found id: ""
	I1202 12:52:49.417955   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.417965   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:49.417977   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:49.418034   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:49.450248   59162 cri.go:89] found id: ""
	I1202 12:52:49.450276   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.450286   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:49.450295   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:49.450366   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:49.484288   59162 cri.go:89] found id: ""
	I1202 12:52:49.484311   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.484318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:49.484323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:49.484372   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:49.518565   59162 cri.go:89] found id: ""
	I1202 12:52:49.518585   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.518595   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:49.518602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:49.518650   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:49.552524   59162 cri.go:89] found id: ""
	I1202 12:52:49.552549   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.552556   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:49.552561   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:49.552609   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:49.586570   59162 cri.go:89] found id: ""
	I1202 12:52:49.586599   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.586610   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:49.586617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:49.586672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:49.622561   59162 cri.go:89] found id: ""
	I1202 12:52:49.622590   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.622601   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:49.622609   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:49.622666   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:49.659092   59162 cri.go:89] found id: ""
	I1202 12:52:49.659117   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.659129   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:49.659152   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:49.659170   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:49.672461   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:49.672491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:49.738609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:49.738637   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:49.738670   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:49.820458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:49.820488   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.860240   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:49.860269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.411571   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:52.425037   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:52.425106   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:52.458215   59162 cri.go:89] found id: ""
	I1202 12:52:52.458244   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.458255   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:52.458262   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:52.458316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:52.491781   59162 cri.go:89] found id: ""
	I1202 12:52:52.491809   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.491820   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:52.491827   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:52.491879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:52.528829   59162 cri.go:89] found id: ""
	I1202 12:52:52.528855   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.528864   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:52.528870   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:52.528914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:52.560930   59162 cri.go:89] found id: ""
	I1202 12:52:52.560957   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.560965   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:52.560971   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:52.561021   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:52.594102   59162 cri.go:89] found id: ""
	I1202 12:52:52.594139   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.594152   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:52.594160   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:52.594222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:52.627428   59162 cri.go:89] found id: ""
	I1202 12:52:52.627452   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.627460   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:52.627465   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:52.627529   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:52.659143   59162 cri.go:89] found id: ""
	I1202 12:52:52.659167   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.659175   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:52.659180   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:52.659230   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:52.691603   59162 cri.go:89] found id: ""
	I1202 12:52:52.691625   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.691632   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:52.691640   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:52.691651   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.741989   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:52.742016   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:52.755769   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:52.755790   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:52.826397   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:52.826418   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:52.826431   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:52.904705   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:52.904734   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.449363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:55.462294   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:55.462350   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:55.500829   59162 cri.go:89] found id: ""
	I1202 12:52:55.500856   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.500865   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:55.500871   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:55.500927   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:55.533890   59162 cri.go:89] found id: ""
	I1202 12:52:55.533920   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.533931   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:55.533942   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:55.533998   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:55.566686   59162 cri.go:89] found id: ""
	I1202 12:52:55.566715   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.566725   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:55.566736   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:55.566790   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:55.598330   59162 cri.go:89] found id: ""
	I1202 12:52:55.598357   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.598367   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:55.598374   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:55.598429   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:55.630648   59162 cri.go:89] found id: ""
	I1202 12:52:55.630676   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.630686   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:55.630694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:55.630755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:55.664611   59162 cri.go:89] found id: ""
	I1202 12:52:55.664633   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.664640   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:55.664645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:55.664687   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:55.697762   59162 cri.go:89] found id: ""
	I1202 12:52:55.697789   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.697797   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:55.697803   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:55.697853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:55.735239   59162 cri.go:89] found id: ""
	I1202 12:52:55.735263   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.735271   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:55.735279   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:55.735292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:55.805187   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:55.805217   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:55.805233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:55.888420   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:55.888452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.927535   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:55.927561   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:55.976883   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:55.976909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.490700   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:58.504983   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:58.505053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:58.541332   59162 cri.go:89] found id: ""
	I1202 12:52:58.541352   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.541359   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:58.541365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:58.541409   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:58.579437   59162 cri.go:89] found id: ""
	I1202 12:52:58.579459   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.579466   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:58.579472   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:58.579521   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:58.617374   59162 cri.go:89] found id: ""
	I1202 12:52:58.617406   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.617417   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:58.617425   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:58.617486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:58.653242   59162 cri.go:89] found id: ""
	I1202 12:52:58.653269   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.653280   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:58.653287   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:58.653345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:58.686171   59162 cri.go:89] found id: ""
	I1202 12:52:58.686201   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.686210   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:58.686215   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:58.686262   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:58.719934   59162 cri.go:89] found id: ""
	I1202 12:52:58.719956   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.719966   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:58.719974   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:58.720030   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:58.759587   59162 cri.go:89] found id: ""
	I1202 12:52:58.759610   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.759619   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:58.759626   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:58.759678   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:58.790885   59162 cri.go:89] found id: ""
	I1202 12:52:58.790908   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.790915   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:58.790922   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:58.790934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:58.840192   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:58.840220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.853639   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:58.853663   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:58.924643   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:58.924669   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:58.924679   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:59.013916   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:59.013945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.552305   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:01.565577   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:01.565642   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:01.598261   59162 cri.go:89] found id: ""
	I1202 12:53:01.598294   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.598304   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:01.598310   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:01.598377   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:01.631527   59162 cri.go:89] found id: ""
	I1202 12:53:01.631556   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.631565   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:01.631570   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:01.631631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:01.670788   59162 cri.go:89] found id: ""
	I1202 12:53:01.670812   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.670820   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:01.670826   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:01.670880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:01.708801   59162 cri.go:89] found id: ""
	I1202 12:53:01.708828   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.708838   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:01.708846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:01.708914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:01.746053   59162 cri.go:89] found id: ""
	I1202 12:53:01.746074   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.746083   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:01.746120   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:01.746184   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:01.780873   59162 cri.go:89] found id: ""
	I1202 12:53:01.780894   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.780901   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:01.780907   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:01.780951   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:01.817234   59162 cri.go:89] found id: ""
	I1202 12:53:01.817259   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.817269   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:01.817276   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:01.817335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:01.850277   59162 cri.go:89] found id: ""
	I1202 12:53:01.850302   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.850317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:01.850327   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:01.850342   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:01.933014   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:01.933055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.971533   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:01.971562   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:02.020280   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:02.020311   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:02.034786   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:02.034814   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:02.104013   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:04.604595   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:04.618004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:04.618057   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:04.651388   59162 cri.go:89] found id: ""
	I1202 12:53:04.651414   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.651428   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:04.651436   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:04.651495   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:04.686973   59162 cri.go:89] found id: ""
	I1202 12:53:04.686998   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.687005   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:04.687019   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:04.687063   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:04.720630   59162 cri.go:89] found id: ""
	I1202 12:53:04.720654   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.720661   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:04.720667   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:04.720724   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:04.754657   59162 cri.go:89] found id: ""
	I1202 12:53:04.754682   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.754689   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:04.754694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:04.754746   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:04.787583   59162 cri.go:89] found id: ""
	I1202 12:53:04.787611   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.787621   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:04.787628   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:04.787686   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:04.818962   59162 cri.go:89] found id: ""
	I1202 12:53:04.818988   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.818999   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:04.819006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:04.819059   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:04.852015   59162 cri.go:89] found id: ""
	I1202 12:53:04.852035   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.852042   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:04.852047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:04.852097   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:04.886272   59162 cri.go:89] found id: ""
	I1202 12:53:04.886294   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.886301   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:04.886309   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:04.886320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:04.934682   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:04.934712   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:04.947889   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:04.947911   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:05.018970   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:05.018995   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:05.019010   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:05.098203   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:05.098233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:07.637320   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:07.650643   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:07.650706   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:07.683468   59162 cri.go:89] found id: ""
	I1202 12:53:07.683491   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.683499   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:07.683504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:07.683565   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:07.719765   59162 cri.go:89] found id: ""
	I1202 12:53:07.719792   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.719799   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:07.719805   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:07.719855   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:07.760939   59162 cri.go:89] found id: ""
	I1202 12:53:07.760986   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.760996   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:07.761004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:07.761066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:07.799175   59162 cri.go:89] found id: ""
	I1202 12:53:07.799219   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.799231   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:07.799239   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:07.799300   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:07.831957   59162 cri.go:89] found id: ""
	I1202 12:53:07.831987   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.831999   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:07.832007   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:07.832067   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:07.865982   59162 cri.go:89] found id: ""
	I1202 12:53:07.866008   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.866015   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:07.866022   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:07.866080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:07.903443   59162 cri.go:89] found id: ""
	I1202 12:53:07.903467   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.903477   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:07.903484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:07.903541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:07.939268   59162 cri.go:89] found id: ""
	I1202 12:53:07.939293   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.939300   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:07.939310   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:07.939324   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:07.952959   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:07.952984   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:08.039178   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:08.039207   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:08.039223   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:08.121432   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:08.121469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:08.164739   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:08.164767   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:10.718599   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:10.731079   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:10.731154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:10.767605   59162 cri.go:89] found id: ""
	I1202 12:53:10.767626   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.767633   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:10.767639   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:10.767689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:10.800464   59162 cri.go:89] found id: ""
	I1202 12:53:10.800483   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.800491   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:10.800496   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:10.800554   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:10.840808   59162 cri.go:89] found id: ""
	I1202 12:53:10.840836   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.840853   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:10.840859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:10.840922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:10.877653   59162 cri.go:89] found id: ""
	I1202 12:53:10.877681   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.877690   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:10.877698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:10.877755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:10.915849   59162 cri.go:89] found id: ""
	I1202 12:53:10.915873   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.915883   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:10.915891   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:10.915953   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:10.948652   59162 cri.go:89] found id: ""
	I1202 12:53:10.948680   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.948691   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:10.948697   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:10.948755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:10.983126   59162 cri.go:89] found id: ""
	I1202 12:53:10.983154   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.983165   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:10.983172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:10.983232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:11.015350   59162 cri.go:89] found id: ""
	I1202 12:53:11.015378   59162 logs.go:282] 0 containers: []
	W1202 12:53:11.015390   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:11.015400   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:11.015414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:11.028713   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:11.028737   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:11.095904   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:11.095932   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:11.095950   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:11.179078   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:11.179114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:11.216075   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:11.216106   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:13.774975   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:13.787745   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:13.787804   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:13.821793   59162 cri.go:89] found id: ""
	I1202 12:53:13.821824   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.821834   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:13.821840   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:13.821885   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:13.854831   59162 cri.go:89] found id: ""
	I1202 12:53:13.854855   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.854864   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:13.854871   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:13.854925   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:13.885113   59162 cri.go:89] found id: ""
	I1202 12:53:13.885142   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.885149   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:13.885155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:13.885201   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:13.915811   59162 cri.go:89] found id: ""
	I1202 12:53:13.915841   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.915851   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:13.915859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:13.915914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:13.948908   59162 cri.go:89] found id: ""
	I1202 12:53:13.948936   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.948946   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:13.948953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:13.949016   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:13.986502   59162 cri.go:89] found id: ""
	I1202 12:53:13.986531   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.986540   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:13.986548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:13.986607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:14.018182   59162 cri.go:89] found id: ""
	I1202 12:53:14.018210   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.018221   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:14.018229   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:14.018287   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:14.054185   59162 cri.go:89] found id: ""
	I1202 12:53:14.054221   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.054233   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:14.054244   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:14.054272   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:14.131353   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.131381   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:14.131402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:14.212787   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:14.212822   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:14.254043   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:14.254073   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:14.309591   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:14.309620   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:16.824827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:16.838150   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:16.838210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:16.871550   59162 cri.go:89] found id: ""
	I1202 12:53:16.871570   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.871577   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:16.871582   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:16.871625   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:16.908736   59162 cri.go:89] found id: ""
	I1202 12:53:16.908766   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.908775   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:16.908781   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:16.908844   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:16.941404   59162 cri.go:89] found id: ""
	I1202 12:53:16.941427   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.941437   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:16.941444   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:16.941500   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:16.971984   59162 cri.go:89] found id: ""
	I1202 12:53:16.972011   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.972023   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:16.972030   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:16.972079   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:17.004573   59162 cri.go:89] found id: ""
	I1202 12:53:17.004596   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.004607   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:17.004614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:17.004661   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:17.037171   59162 cri.go:89] found id: ""
	I1202 12:53:17.037199   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.037210   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:17.037218   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:17.037271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:17.070862   59162 cri.go:89] found id: ""
	I1202 12:53:17.070888   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.070899   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:17.070906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:17.070959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:17.102642   59162 cri.go:89] found id: ""
	I1202 12:53:17.102668   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.102678   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:17.102688   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:17.102701   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:17.182590   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:17.182623   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:17.224313   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:17.224346   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:17.272831   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:17.272855   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:17.286217   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:17.286240   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:17.357274   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:19.858294   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:19.871731   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:19.871787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:19.906270   59162 cri.go:89] found id: ""
	I1202 12:53:19.906290   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.906297   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:19.906303   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:19.906345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:19.937769   59162 cri.go:89] found id: ""
	I1202 12:53:19.937790   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.937797   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:19.937802   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:19.937845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:19.971667   59162 cri.go:89] found id: ""
	I1202 12:53:19.971689   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.971706   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:19.971714   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:19.971787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:20.005434   59162 cri.go:89] found id: ""
	I1202 12:53:20.005455   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.005461   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:20.005467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:20.005512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:20.041817   59162 cri.go:89] found id: ""
	I1202 12:53:20.041839   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.041848   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:20.041856   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:20.041906   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:20.073923   59162 cri.go:89] found id: ""
	I1202 12:53:20.073946   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.073958   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:20.073966   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:20.074026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:20.107360   59162 cri.go:89] found id: ""
	I1202 12:53:20.107398   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.107409   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:20.107416   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:20.107479   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:20.153919   59162 cri.go:89] found id: ""
	I1202 12:53:20.153942   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.153952   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:20.153963   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:20.153977   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:20.211581   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:20.211610   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:20.227589   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:20.227615   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:20.305225   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:20.305250   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:20.305265   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:20.382674   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:20.382713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:22.924662   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:22.940038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:22.940101   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:22.984768   59162 cri.go:89] found id: ""
	I1202 12:53:22.984795   59162 logs.go:282] 0 containers: []
	W1202 12:53:22.984806   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:22.984815   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:22.984876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:23.024159   59162 cri.go:89] found id: ""
	I1202 12:53:23.024180   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.024188   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:23.024194   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:23.024254   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:23.059929   59162 cri.go:89] found id: ""
	I1202 12:53:23.059948   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.059956   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:23.059961   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:23.060003   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:23.093606   59162 cri.go:89] found id: ""
	I1202 12:53:23.093627   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.093633   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:23.093639   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:23.093689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:23.127868   59162 cri.go:89] found id: ""
	I1202 12:53:23.127893   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.127904   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:23.127910   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:23.127965   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:23.164988   59162 cri.go:89] found id: ""
	I1202 12:53:23.165006   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.165013   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:23.165018   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:23.165058   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:23.196389   59162 cri.go:89] found id: ""
	I1202 12:53:23.196412   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.196423   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:23.196430   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:23.196481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:23.229337   59162 cri.go:89] found id: ""
	I1202 12:53:23.229358   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.229366   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:23.229376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:23.229404   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:23.284041   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:23.284066   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:23.297861   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:23.297884   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:23.364113   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:23.364131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:23.364142   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:23.446244   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:23.446273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:25.986668   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:25.998953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:25.999013   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:26.034844   59162 cri.go:89] found id: ""
	I1202 12:53:26.034868   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.034876   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:26.034883   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:26.034938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:26.067050   59162 cri.go:89] found id: ""
	I1202 12:53:26.067076   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.067083   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:26.067089   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:26.067152   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:26.098705   59162 cri.go:89] found id: ""
	I1202 12:53:26.098735   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.098746   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:26.098754   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:26.098812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:26.131283   59162 cri.go:89] found id: ""
	I1202 12:53:26.131312   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.131321   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:26.131327   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:26.131379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:26.164905   59162 cri.go:89] found id: ""
	I1202 12:53:26.164933   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.164943   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:26.164950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:26.165009   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:26.196691   59162 cri.go:89] found id: ""
	I1202 12:53:26.196715   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.196724   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:26.196732   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:26.196789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:26.227341   59162 cri.go:89] found id: ""
	I1202 12:53:26.227364   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.227374   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:26.227380   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:26.227436   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:26.260569   59162 cri.go:89] found id: ""
	I1202 12:53:26.260589   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.260597   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:26.260606   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:26.260619   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:26.313150   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:26.313175   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:26.327732   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:26.327762   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:26.392748   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:26.392768   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:26.392778   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:26.474456   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:26.474484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.018514   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:29.032328   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:29.032457   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:29.067696   59162 cri.go:89] found id: ""
	I1202 12:53:29.067720   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.067732   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:29.067738   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:29.067794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:29.101076   59162 cri.go:89] found id: ""
	I1202 12:53:29.101096   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.101103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:29.101108   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:29.101150   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:29.136446   59162 cri.go:89] found id: ""
	I1202 12:53:29.136473   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.136483   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:29.136489   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:29.136552   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:29.170820   59162 cri.go:89] found id: ""
	I1202 12:53:29.170849   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.170860   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:29.170868   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:29.170931   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:29.205972   59162 cri.go:89] found id: ""
	I1202 12:53:29.206001   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.206012   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:29.206020   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:29.206086   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:29.242118   59162 cri.go:89] found id: ""
	I1202 12:53:29.242155   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.242165   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:29.242172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:29.242222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:29.281377   59162 cri.go:89] found id: ""
	I1202 12:53:29.281405   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.281417   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:29.281426   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:29.281487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:29.316350   59162 cri.go:89] found id: ""
	I1202 12:53:29.316381   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.316393   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:29.316404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:29.316418   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:29.392609   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:29.392648   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.430777   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:29.430804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:29.484157   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:29.484190   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:29.498434   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:29.498457   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:29.568203   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.069043   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:32.081796   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:32.081867   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:32.115767   59162 cri.go:89] found id: ""
	I1202 12:53:32.115789   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.115797   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:32.115802   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:32.115861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:32.145962   59162 cri.go:89] found id: ""
	I1202 12:53:32.145984   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.145992   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:32.145999   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:32.146046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:32.177709   59162 cri.go:89] found id: ""
	I1202 12:53:32.177734   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.177744   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:32.177752   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:32.177796   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:32.211897   59162 cri.go:89] found id: ""
	I1202 12:53:32.211921   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.211930   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:32.211937   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:32.211994   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:32.244401   59162 cri.go:89] found id: ""
	I1202 12:53:32.244425   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.244434   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:32.244442   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:32.244503   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:32.278097   59162 cri.go:89] found id: ""
	I1202 12:53:32.278123   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.278140   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:32.278151   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:32.278210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:32.312740   59162 cri.go:89] found id: ""
	I1202 12:53:32.312774   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.312785   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:32.312793   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:32.312860   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:32.345849   59162 cri.go:89] found id: ""
	I1202 12:53:32.345878   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.345889   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:32.345901   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:32.345917   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:32.395961   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:32.395998   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:32.409582   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:32.409609   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:32.473717   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.473746   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:32.473763   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:32.548547   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:32.548580   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:35.088628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:35.102152   59162 kubeadm.go:597] duration metric: took 4m2.014751799s to restartPrimaryControlPlane
	W1202 12:53:35.102217   59162 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:35.102244   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:36.768528   59162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666262663s)
	I1202 12:53:36.768601   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:36.783104   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:36.792966   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:36.802188   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:36.802205   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:36.802234   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:36.811253   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:36.811290   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:36.820464   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:36.829386   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:36.829426   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:36.838814   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.847241   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:36.847272   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.856295   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:36.864892   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:36.864929   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:36.873699   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:37.076297   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:55:32.968600   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:55:32.968731   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:55:32.970229   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:32.970291   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:32.970394   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:32.970513   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:32.970629   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:32.970717   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:32.972396   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:32.972491   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:32.972577   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:32.972734   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:32.972823   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:32.972926   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:32.973006   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:32.973108   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:32.973192   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:32.973318   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:32.973429   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:32.973501   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:32.973594   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:32.973658   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:32.973722   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:32.973819   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:32.973903   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:32.974041   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:32.974157   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:32.974206   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:32.974301   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:32.976508   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:32.976620   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:32.976741   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:32.976842   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:32.976957   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:32.977191   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:32.977281   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:55:32.977342   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977505   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977579   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977795   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977906   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978091   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978174   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978394   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978497   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978743   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978756   59162 kubeadm.go:310] 
	I1202 12:55:32.978801   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:55:32.978859   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:55:32.978868   59162 kubeadm.go:310] 
	I1202 12:55:32.978914   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:55:32.978961   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:55:32.979078   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:55:32.979088   59162 kubeadm.go:310] 
	I1202 12:55:32.979230   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:55:32.979279   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:55:32.979337   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:55:32.979346   59162 kubeadm.go:310] 
	I1202 12:55:32.979484   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:55:32.979580   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:55:32.979593   59162 kubeadm.go:310] 
	I1202 12:55:32.979721   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:55:32.979848   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:55:32.979968   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:55:32.980059   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:55:32.980127   59162 kubeadm.go:310] 
	W1202 12:55:32.980202   59162 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
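Before minikube resets and retries below, the troubleshooting advice printed above can be followed directly on the node. A compact version of those steps, using the same service name and CRI-O socket the log refers to (a sketch, not part of the test run; the --no-pager and tail trimming are only for readability):

	# is the kubelet service up, and what does its journal say?
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# list any control-plane containers CRI-O actually started
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause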
	
	I1202 12:55:32.980267   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:55:33.452325   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:55:33.467527   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:55:33.477494   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:55:33.477522   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:55:33.477575   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:55:33.487333   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:55:33.487395   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:55:33.497063   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:55:33.506552   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:55:33.506605   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:55:33.515968   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.524922   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:55:33.524956   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.534339   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:55:33.543370   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:55:33.543403   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:55:33.552970   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:55:33.624833   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:33.624990   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:33.767688   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:33.767796   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:33.767909   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:33.935314   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:33.937193   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:33.937290   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:33.937402   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:33.937513   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:33.937620   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:33.937722   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:33.937791   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:33.937845   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:33.937896   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:33.937964   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:33.938028   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:33.938061   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:33.938108   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:34.167163   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:35.008947   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:35.304057   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:35.385824   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:35.409687   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:35.413131   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:35.413218   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:35.569508   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:35.571455   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:35.571596   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:35.578476   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:35.579686   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:35.580586   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:35.582869   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:56:15.585409   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:56:15.585530   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:15.585792   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:20.586011   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:20.586257   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:30.586783   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:30.587053   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:50.587516   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:50.587731   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586451   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:57:30.586705   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586735   59162 kubeadm.go:310] 
	I1202 12:57:30.586786   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:57:30.586842   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:57:30.586859   59162 kubeadm.go:310] 
	I1202 12:57:30.586924   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:57:30.586990   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:57:30.587140   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:57:30.587152   59162 kubeadm.go:310] 
	I1202 12:57:30.587292   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:57:30.587347   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:57:30.587387   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:57:30.587405   59162 kubeadm.go:310] 
	I1202 12:57:30.587557   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:57:30.587642   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:57:30.587655   59162 kubeadm.go:310] 
	I1202 12:57:30.587751   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:57:30.587841   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:57:30.587923   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:57:30.588029   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:57:30.588043   59162 kubeadm.go:310] 
	I1202 12:57:30.588959   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:57:30.589087   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:57:30.589211   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
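The repeated [kubelet-check] messages above come from kubeadm polling the kubelet's local healthz endpoint until the 4m0s budget runs out. The same probe can be run by hand to confirm whether the kubelet ever started listening (a sketch using the URL and service check already present in the log):

	# a healthy kubelet answers "ok"; "connection refused" means it never came up
	curl -sSL http://localhost:10248/healthz; echo
	sudo systemctl is-active kubelet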
	I1202 12:57:30.589277   59162 kubeadm.go:394] duration metric: took 7m57.557592718s to StartCluster
	I1202 12:57:30.589312   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:57:30.589358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:57:30.634368   59162 cri.go:89] found id: ""
	I1202 12:57:30.634402   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.634414   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:57:30.634423   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:57:30.634489   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:57:30.669582   59162 cri.go:89] found id: ""
	I1202 12:57:30.669605   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.669617   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:57:30.669625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:57:30.669679   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:57:30.707779   59162 cri.go:89] found id: ""
	I1202 12:57:30.707805   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.707815   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:57:30.707823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:57:30.707878   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:57:30.745724   59162 cri.go:89] found id: ""
	I1202 12:57:30.745751   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.745761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:57:30.745768   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:57:30.745816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:57:30.782946   59162 cri.go:89] found id: ""
	I1202 12:57:30.782969   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.782980   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:57:30.782987   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:57:30.783040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:57:30.821743   59162 cri.go:89] found id: ""
	I1202 12:57:30.821776   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.821787   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:57:30.821795   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:57:30.821843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:57:30.859754   59162 cri.go:89] found id: ""
	I1202 12:57:30.859783   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.859793   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:57:30.859801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:57:30.859876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:57:30.893632   59162 cri.go:89] found id: ""
	I1202 12:57:30.893660   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.893668   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
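Each query above asks CRI-O for one component by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and every one comes back empty, i.e. no control-plane container was ever created. The same check can be made in one pass (a sketch using the crictl endpoint suggested earlier in the log):

	# one listing of everything CRI-O knows about, running or exited
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a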
	I1202 12:57:30.893677   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:57:30.893690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:57:30.946387   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:57:30.946413   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:57:30.960540   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:57:30.960565   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:57:31.038246   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
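The "connection refused" on localhost:8443 is consistent with the empty container listings: no kube-apiserver is serving on the node. A quick way to confirm that directly on the VM (a sketch; ss is assumed to be available in the guest image):

	sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"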
	I1202 12:57:31.038267   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:57:31.038279   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:57:31.155549   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:57:31.155584   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 12:57:31.221709   59162 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:57:31.221773   59162 out.go:270] * 
	W1202 12:57:31.221846   59162 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.221868   59162 out.go:270] * 
	W1202 12:57:31.222987   59162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
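When reporting this upstream, the box above asks for a full log bundle. A sketch of collecting it for the affected cluster (the profile name is not shown in this excerpt, so the -p value below is a placeholder):

	minikube logs --file=logs.txt -p <profile-name>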
	I1202 12:57:31.226661   59162 out.go:201] 
	W1202 12:57:31.227691   59162 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.227739   59162 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:57:31.227763   59162 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:57:31.229696   59162 out.go:201] 

                                                
                                                
** /stderr **
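The kubeadm output captured above already names the relevant kubelet diagnostics. A minimal triage sketch, assuming shell access to the test VM (for example via 'minikube ssh -p old-k8s-version-666766'); the service name and the CRI-O socket path are the ones printed in the kubeadm message itself:

	# run these inside the VM, e.g. after 'minikube ssh -p old-k8s-version-666766'
	sudo systemctl status kubelet                      # is the kubelet service active and enabled?
	sudo journalctl -xeu kubelet | tail -n 100         # why the kubelet exited (cgroup driver, bad config, ...)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # any control-plane containers at all?
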
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
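The log's own suggestion is to pin the kubelet cgroup driver. A retry sketch that reuses the exact arguments from the failing command above and adds only the suggested flag; whether it resolves the failure in this environment is not established by this run:

	out/minikube-linux-amd64 start -p old-k8s-version-666766 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd   # flag suggested by the W1202 12:57:31.227739 line above
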
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (243.714451ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-666766 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-953044            | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-983490             | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-983490                  | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658679                  | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-983490 image list                           | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:49 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-666766        | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-953044                 | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:51:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:51:53.986642   61173 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:51:53.986878   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.986887   61173 out.go:358] Setting ErrFile to fd 2...
	I1202 12:51:53.986891   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.987040   61173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:51:53.987531   61173 out.go:352] Setting JSON to false
	I1202 12:51:53.988496   61173 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5666,"bootTime":1733138248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:51:53.988587   61173 start.go:139] virtualization: kvm guest
	I1202 12:51:53.990552   61173 out.go:177] * [default-k8s-diff-port-653783] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:51:53.991681   61173 notify.go:220] Checking for updates...
	I1202 12:51:53.991692   61173 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:51:53.992827   61173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:51:53.993900   61173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:51:53.995110   61173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:51:53.996273   61173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:51:53.997326   61173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:51:53.998910   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:51:53.999556   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:53.999630   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.014837   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I1202 12:51:54.015203   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.015691   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.015717   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.016024   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.016213   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.016420   61173 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:51:54.016702   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.016740   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.031103   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I1202 12:51:54.031480   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.031846   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.031862   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.032152   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.032313   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.066052   61173 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:51:54.067269   61173 start.go:297] selected driver: kvm2
	I1202 12:51:54.067282   61173 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.067398   61173 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:51:54.068083   61173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.068159   61173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:51:54.082839   61173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:51:54.083361   61173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:51:54.083405   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:51:54.083450   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:51:54.083491   61173 start.go:340] cluster config:
	{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.083581   61173 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.085236   61173 out.go:177] * Starting "default-k8s-diff-port-653783" primary control-plane node in "default-k8s-diff-port-653783" cluster
	I1202 12:51:54.086247   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:51:54.086275   61173 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:51:54.086281   61173 cache.go:56] Caching tarball of preloaded images
	I1202 12:51:54.086363   61173 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:51:54.086377   61173 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:51:54.086471   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:51:54.086683   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:51:54.086721   61173 start.go:364] duration metric: took 21.68µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:51:54.086742   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:51:54.086750   61173 fix.go:54] fixHost starting: 
	I1202 12:51:54.087016   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.087049   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.100439   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1202 12:51:54.100860   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.101284   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.101305   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.101699   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.101899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.102027   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:51:54.103398   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Running err=<nil>
	W1202 12:51:54.103428   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:51:54.104862   61173 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-653783" VM ...
	I1202 12:51:51.250214   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:53.251543   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:55.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.384562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:54.397979   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:54.398032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:54.431942   59162 cri.go:89] found id: ""
	I1202 12:51:54.431965   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.431973   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:54.431979   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:54.432024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:54.466033   59162 cri.go:89] found id: ""
	I1202 12:51:54.466054   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.466062   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:54.466067   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:54.466116   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:54.506462   59162 cri.go:89] found id: ""
	I1202 12:51:54.506486   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.506493   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:54.506499   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:54.506545   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:54.539966   59162 cri.go:89] found id: ""
	I1202 12:51:54.539996   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.540006   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:54.540013   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:54.540068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:54.572987   59162 cri.go:89] found id: ""
	I1202 12:51:54.573027   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.573038   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:54.573046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:54.573107   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:54.609495   59162 cri.go:89] found id: ""
	I1202 12:51:54.609528   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.609539   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:54.609547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:54.609593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:54.643109   59162 cri.go:89] found id: ""
	I1202 12:51:54.643136   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.643148   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:54.643205   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:54.643279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:54.681113   59162 cri.go:89] found id: ""
	I1202 12:51:54.681151   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.681160   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:54.681168   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:54.681180   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.734777   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:54.734806   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:54.748171   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:54.748196   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:54.821609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.821628   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:54.821642   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:54.900306   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:54.900339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.438971   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:57.454128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:57.454187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:57.489852   59162 cri.go:89] found id: ""
	I1202 12:51:57.489877   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.489885   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:57.489890   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:57.489938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:57.523496   59162 cri.go:89] found id: ""
	I1202 12:51:57.523515   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.523522   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:57.523528   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:57.523576   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:57.554394   59162 cri.go:89] found id: ""
	I1202 12:51:57.554417   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.554429   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:57.554436   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:57.554497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:57.586259   59162 cri.go:89] found id: ""
	I1202 12:51:57.586281   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.586291   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:57.586298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:57.586353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:57.618406   59162 cri.go:89] found id: ""
	I1202 12:51:57.618427   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.618435   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:57.618440   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:57.618482   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:57.649491   59162 cri.go:89] found id: ""
	I1202 12:51:57.649517   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.649527   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:57.649532   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:57.649575   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:57.682286   59162 cri.go:89] found id: ""
	I1202 12:51:57.682306   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.682313   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:57.682319   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:57.682364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:57.720929   59162 cri.go:89] found id: ""
	I1202 12:51:57.720956   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.720967   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:57.720977   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:57.720987   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:57.802270   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:57.802302   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.841214   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:57.841246   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:57.893691   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:57.893724   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:57.906616   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:57.906640   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:57.973328   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.153852   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:56.653113   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.105934   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:51:54.105950   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.106120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:51:54.108454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.108866   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:48:33 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:51:54.108899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.109032   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:51:54.109170   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109328   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109487   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:51:54.109662   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:51:54.109863   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:51:54.109875   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:51:57.012461   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:51:57.751276   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.250936   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.473500   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:00.487912   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:00.487973   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:00.526513   59162 cri.go:89] found id: ""
	I1202 12:52:00.526539   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.526548   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:00.526557   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:00.526620   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:00.561483   59162 cri.go:89] found id: ""
	I1202 12:52:00.561511   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.561519   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:00.561526   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:00.561583   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:00.592435   59162 cri.go:89] found id: ""
	I1202 12:52:00.592473   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.592484   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:00.592491   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:00.592551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:00.624686   59162 cri.go:89] found id: ""
	I1202 12:52:00.624710   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.624722   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:00.624727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:00.624771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:00.662610   59162 cri.go:89] found id: ""
	I1202 12:52:00.662639   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.662650   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:00.662657   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:00.662721   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:00.695972   59162 cri.go:89] found id: ""
	I1202 12:52:00.695993   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.696000   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:00.696006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:00.696048   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:00.727200   59162 cri.go:89] found id: ""
	I1202 12:52:00.727230   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.727253   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:00.727261   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:00.727316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:00.761510   59162 cri.go:89] found id: ""
	I1202 12:52:00.761536   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.761545   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:00.761556   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:00.761568   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:00.812287   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:00.812318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:00.825282   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:00.825309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:00.894016   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.894042   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:00.894065   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:00.972001   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:00.972034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:59.152373   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:01.153532   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.653266   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.084529   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:02.751465   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:04.752349   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.512982   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:03.528814   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:03.528884   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:03.564137   59162 cri.go:89] found id: ""
	I1202 12:52:03.564159   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.564166   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:03.564173   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:03.564223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:03.608780   59162 cri.go:89] found id: ""
	I1202 12:52:03.608811   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.608822   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:03.608829   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:03.608891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:03.644906   59162 cri.go:89] found id: ""
	I1202 12:52:03.644943   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.644954   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:03.644978   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:03.645052   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:03.676732   59162 cri.go:89] found id: ""
	I1202 12:52:03.676754   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.676761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:03.676767   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:03.676809   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:03.711338   59162 cri.go:89] found id: ""
	I1202 12:52:03.711362   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.711369   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:03.711375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:03.711424   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:03.743657   59162 cri.go:89] found id: ""
	I1202 12:52:03.743682   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.743689   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:03.743694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:03.743737   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:03.777740   59162 cri.go:89] found id: ""
	I1202 12:52:03.777759   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.777766   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:03.777772   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:03.777818   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:03.811145   59162 cri.go:89] found id: ""
	I1202 12:52:03.811169   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.811179   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:03.811190   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:03.811204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:03.862069   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:03.862093   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:03.875133   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:03.875164   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:03.947077   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:03.947102   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:03.947114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:04.023458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:04.023487   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:06.562323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:06.577498   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:06.577556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:06.613937   59162 cri.go:89] found id: ""
	I1202 12:52:06.613962   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.613970   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:06.613976   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:06.614023   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:06.647630   59162 cri.go:89] found id: ""
	I1202 12:52:06.647655   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.647662   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:06.647667   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:06.647711   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:06.683758   59162 cri.go:89] found id: ""
	I1202 12:52:06.683783   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.683793   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:06.683800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:06.683861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:06.722664   59162 cri.go:89] found id: ""
	I1202 12:52:06.722686   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.722694   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:06.722699   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:06.722747   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:06.756255   59162 cri.go:89] found id: ""
	I1202 12:52:06.756280   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.756290   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:06.756296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:06.756340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:06.792350   59162 cri.go:89] found id: ""
	I1202 12:52:06.792376   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.792387   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:06.792394   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:06.792450   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:06.827259   59162 cri.go:89] found id: ""
	I1202 12:52:06.827289   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.827301   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:06.827308   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:06.827367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:06.858775   59162 cri.go:89] found id: ""
	I1202 12:52:06.858795   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.858802   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:06.858811   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:06.858821   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:06.911764   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:06.911795   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:06.925297   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:06.925326   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:06.993703   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:06.993730   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:06.993744   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:07.073657   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:07.073685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:05.653526   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:08.152177   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:06.164438   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:07.251496   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.752479   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.611640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:09.626141   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:09.626199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:09.661406   59162 cri.go:89] found id: ""
	I1202 12:52:09.661425   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.661432   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:09.661439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:09.661498   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:09.698145   59162 cri.go:89] found id: ""
	I1202 12:52:09.698173   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.698184   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:09.698191   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:09.698252   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:09.732150   59162 cri.go:89] found id: ""
	I1202 12:52:09.732178   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.732189   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:09.732197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:09.732261   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:09.768040   59162 cri.go:89] found id: ""
	I1202 12:52:09.768063   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.768070   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:09.768076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:09.768130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:09.801038   59162 cri.go:89] found id: ""
	I1202 12:52:09.801064   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.801075   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:09.801082   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:09.801130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:09.841058   59162 cri.go:89] found id: ""
	I1202 12:52:09.841082   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.841089   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:09.841095   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:09.841137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:09.885521   59162 cri.go:89] found id: ""
	I1202 12:52:09.885541   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.885548   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:09.885554   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:09.885602   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:09.924759   59162 cri.go:89] found id: ""
	I1202 12:52:09.924779   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.924786   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:09.924793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:09.924804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.968241   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:09.968273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:10.020282   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:10.020315   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:10.036491   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:10.036519   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:10.113297   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.113324   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:10.113339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:12.688410   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:12.705296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:12.705356   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:12.743097   59162 cri.go:89] found id: ""
	I1202 12:52:12.743119   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.743127   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:12.743133   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:12.743187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:12.778272   59162 cri.go:89] found id: ""
	I1202 12:52:12.778292   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.778299   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:12.778304   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:12.778365   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:12.816087   59162 cri.go:89] found id: ""
	I1202 12:52:12.816116   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.816127   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:12.816134   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:12.816187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:12.850192   59162 cri.go:89] found id: ""
	I1202 12:52:12.850214   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.850221   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:12.850227   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:12.850282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:12.883325   59162 cri.go:89] found id: ""
	I1202 12:52:12.883351   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.883360   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:12.883367   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:12.883427   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:12.916121   59162 cri.go:89] found id: ""
	I1202 12:52:12.916157   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.916169   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:12.916176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:12.916251   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:12.946704   59162 cri.go:89] found id: ""
	I1202 12:52:12.946733   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.946746   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:12.946753   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:12.946802   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:12.979010   59162 cri.go:89] found id: ""
	I1202 12:52:12.979041   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.979050   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:12.979062   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:12.979075   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:13.062141   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:13.062171   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:13.111866   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:13.111900   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:13.162470   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:13.162498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:13.178497   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:13.178525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:13.245199   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
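
Every "describe nodes" attempt in this loop fails the same way: kubectl on the node cannot reach localhost:8443 because, as the crictl probes above show, no kube-apiserver container is running yet. A quick way to tell "nothing is listening on the API server port" apart from other kubectl failures is a plain TCP dial; the sketch below is a hypothetical helper written for illustration, not something minikube itself runs.

package main

import (
	"fmt"
	"net"
	"time"
)

// apiServerListening reports whether anything accepts TCP connections on the
// given address (for example "localhost:8443"). A refused connection here
// corresponds to kubectl's "connection to the server localhost:8443 was refused".
func apiServerListening(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if apiServerListening("localhost:8443", 2*time.Second) {
		fmt.Println("API server port is accepting connections")
	} else {
		fmt.Println("nothing is listening on localhost:8443 yet")
	}
}
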
	I1202 12:52:10.152556   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:12.153087   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.236522   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:12.249938   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:14.750814   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.746327   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:15.760092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:15.760160   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:15.797460   59162 cri.go:89] found id: ""
	I1202 12:52:15.797484   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.797495   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:15.797503   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:15.797563   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:15.829969   59162 cri.go:89] found id: ""
	I1202 12:52:15.829998   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.830009   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:15.830017   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:15.830072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:15.862390   59162 cri.go:89] found id: ""
	I1202 12:52:15.862418   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.862428   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:15.862435   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:15.862484   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:15.895223   59162 cri.go:89] found id: ""
	I1202 12:52:15.895244   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.895251   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:15.895257   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:15.895311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:15.933157   59162 cri.go:89] found id: ""
	I1202 12:52:15.933184   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.933192   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:15.933197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:15.933245   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:15.964387   59162 cri.go:89] found id: ""
	I1202 12:52:15.964414   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.964425   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:15.964433   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:15.964487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:15.996803   59162 cri.go:89] found id: ""
	I1202 12:52:15.996825   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.996832   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:15.996837   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:15.996881   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:16.029364   59162 cri.go:89] found id: ""
	I1202 12:52:16.029394   59162 logs.go:282] 0 containers: []
	W1202 12:52:16.029402   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:16.029411   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:16.029422   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:16.098237   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:16.098264   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:16.098278   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:16.172386   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:16.172414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:16.216899   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:16.216923   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:16.281565   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:16.281591   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:14.154258   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:16.652807   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.316450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:18.388460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
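
Interleaved with the container probes, process 61173 keeps hitting "dial tcp 192.168.39.154:22: connect: no route to host", meaning the SSH port of that VM is unreachable at the network level rather than merely refusing connections. A minimal retry loop of the kind such a dial error usually sits inside looks like the sketch below; the retry budget and timeouts are illustrative assumptions, not values taken from libmachine.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying a TCP connection until it succeeds or the retry
// budget is exhausted, returning the last error (for example "no route to host").
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// The unreachable target in the log above is 192.168.39.154:22.
	conn, err := dialWithRetry("192.168.39.154:22", 3, 3*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
	fmt.Println("connected")
}
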
	I1202 12:52:16.751794   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:19.250295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:18.796337   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:18.809573   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:18.809637   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:18.847965   59162 cri.go:89] found id: ""
	I1202 12:52:18.847991   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.847999   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:18.848004   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:18.848053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:18.883714   59162 cri.go:89] found id: ""
	I1202 12:52:18.883741   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.883751   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:18.883758   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:18.883817   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:18.918581   59162 cri.go:89] found id: ""
	I1202 12:52:18.918605   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.918612   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:18.918617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:18.918672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:18.954394   59162 cri.go:89] found id: ""
	I1202 12:52:18.954426   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.954437   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:18.954443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:18.954502   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:18.995321   59162 cri.go:89] found id: ""
	I1202 12:52:18.995347   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.995355   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:18.995361   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:18.995423   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:19.034030   59162 cri.go:89] found id: ""
	I1202 12:52:19.034055   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.034066   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:19.034073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:19.034130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:19.073569   59162 cri.go:89] found id: ""
	I1202 12:52:19.073597   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.073609   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:19.073615   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:19.073662   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:19.112049   59162 cri.go:89] found id: ""
	I1202 12:52:19.112078   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.112090   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:19.112100   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:19.112113   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:19.180480   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.180502   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:19.180516   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:19.258236   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:19.258264   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:19.299035   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:19.299053   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:19.352572   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:19.352602   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:21.866524   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:21.879286   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:21.879340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:21.910463   59162 cri.go:89] found id: ""
	I1202 12:52:21.910489   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.910498   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:21.910504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:21.910551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:21.943130   59162 cri.go:89] found id: ""
	I1202 12:52:21.943157   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.943165   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:21.943171   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:21.943216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:21.976969   59162 cri.go:89] found id: ""
	I1202 12:52:21.976990   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.976997   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:21.977002   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:21.977055   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:22.022113   59162 cri.go:89] found id: ""
	I1202 12:52:22.022144   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.022153   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:22.022159   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:22.022218   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:22.057387   59162 cri.go:89] found id: ""
	I1202 12:52:22.057406   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.057413   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:22.057418   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:22.057459   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:22.089832   59162 cri.go:89] found id: ""
	I1202 12:52:22.089866   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.089892   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:22.089900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:22.089960   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:22.121703   59162 cri.go:89] found id: ""
	I1202 12:52:22.121727   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.121735   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:22.121740   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:22.121789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:22.155076   59162 cri.go:89] found id: ""
	I1202 12:52:22.155098   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.155108   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:22.155117   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:22.155137   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:22.234831   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:22.234862   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:22.273912   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:22.273945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:22.327932   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:22.327966   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:22.340890   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:22.340913   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:22.419371   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.153845   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.652993   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:23.653111   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.750980   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.250791   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.919868   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:24.935004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:24.935068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:24.972438   59162 cri.go:89] found id: ""
	I1202 12:52:24.972466   59162 logs.go:282] 0 containers: []
	W1202 12:52:24.972474   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:24.972480   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:24.972525   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:25.009282   59162 cri.go:89] found id: ""
	I1202 12:52:25.009310   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.009320   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:25.009329   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:25.009391   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:25.043227   59162 cri.go:89] found id: ""
	I1202 12:52:25.043254   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.043262   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:25.043267   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:25.043318   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:25.079167   59162 cri.go:89] found id: ""
	I1202 12:52:25.079191   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.079198   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:25.079204   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:25.079263   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:25.110308   59162 cri.go:89] found id: ""
	I1202 12:52:25.110332   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.110340   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:25.110346   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:25.110388   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:25.143804   59162 cri.go:89] found id: ""
	I1202 12:52:25.143830   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.143840   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:25.143846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:25.143903   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:25.178114   59162 cri.go:89] found id: ""
	I1202 12:52:25.178140   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.178147   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:25.178155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:25.178204   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:25.212632   59162 cri.go:89] found id: ""
	I1202 12:52:25.212665   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.212675   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:25.212684   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:25.212696   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:25.267733   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:25.267761   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:25.281025   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:25.281048   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:25.346497   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:25.346520   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:25.346531   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:25.437435   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:25.437469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:27.979493   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:27.993542   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:27.993615   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:28.030681   59162 cri.go:89] found id: ""
	I1202 12:52:28.030705   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.030712   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:28.030718   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:28.030771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:28.063991   59162 cri.go:89] found id: ""
	I1202 12:52:28.064019   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.064027   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:28.064032   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:28.064080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:28.097983   59162 cri.go:89] found id: ""
	I1202 12:52:28.098018   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.098029   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:28.098038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:28.098098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:28.131956   59162 cri.go:89] found id: ""
	I1202 12:52:28.131977   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.131987   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:28.131995   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:28.132071   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:28.170124   59162 cri.go:89] found id: ""
	I1202 12:52:28.170160   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.170171   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:28.170177   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:28.170238   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:28.203127   59162 cri.go:89] found id: ""
	I1202 12:52:28.203149   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.203157   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:28.203163   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:28.203216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:28.240056   59162 cri.go:89] found id: ""
	I1202 12:52:28.240081   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.240088   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:28.240094   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:28.240142   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:28.276673   59162 cri.go:89] found id: ""
	I1202 12:52:28.276699   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.276710   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:28.276720   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:28.276733   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:28.333435   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:28.333470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:28.347465   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:28.347491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:52:26.153244   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.153689   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:27.508437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:26.250897   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.250951   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.252183   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:52:28.432745   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:28.432777   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:28.432792   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:28.515984   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:28.516017   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.057069   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:31.070021   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:31.070084   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:31.106501   59162 cri.go:89] found id: ""
	I1202 12:52:31.106530   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.106540   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:31.106547   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:31.106606   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:31.141190   59162 cri.go:89] found id: ""
	I1202 12:52:31.141219   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.141230   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:31.141238   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:31.141298   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:31.176050   59162 cri.go:89] found id: ""
	I1202 12:52:31.176077   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.176087   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:31.176099   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:31.176169   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:31.211740   59162 cri.go:89] found id: ""
	I1202 12:52:31.211769   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.211780   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:31.211786   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:31.211831   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:31.248949   59162 cri.go:89] found id: ""
	I1202 12:52:31.248974   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.248983   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:31.248990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:31.249044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:31.284687   59162 cri.go:89] found id: ""
	I1202 12:52:31.284709   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.284717   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:31.284723   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:31.284765   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:31.317972   59162 cri.go:89] found id: ""
	I1202 12:52:31.317997   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.318004   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:31.318010   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:31.318065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:31.354866   59162 cri.go:89] found id: ""
	I1202 12:52:31.354893   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.354904   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:31.354914   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:31.354927   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:31.425168   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:31.425191   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:31.425202   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:31.508169   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:31.508204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.547193   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:31.547220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:31.601864   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:31.601892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:30.653415   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:33.153132   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.580471   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:32.752026   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:35.251960   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:34.115652   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:34.131644   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:34.131695   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:34.174473   59162 cri.go:89] found id: ""
	I1202 12:52:34.174500   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.174510   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:34.174518   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:34.174571   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:34.226162   59162 cri.go:89] found id: ""
	I1202 12:52:34.226190   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.226201   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:34.226208   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:34.226271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:34.269202   59162 cri.go:89] found id: ""
	I1202 12:52:34.269230   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.269240   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:34.269248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:34.269327   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:34.304571   59162 cri.go:89] found id: ""
	I1202 12:52:34.304604   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.304615   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:34.304621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:34.304670   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:34.339285   59162 cri.go:89] found id: ""
	I1202 12:52:34.339316   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.339327   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:34.339334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:34.339401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:34.374919   59162 cri.go:89] found id: ""
	I1202 12:52:34.374952   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.374964   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:34.374973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:34.375035   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:34.409292   59162 cri.go:89] found id: ""
	I1202 12:52:34.409319   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.409330   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:34.409337   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:34.409404   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:34.442536   59162 cri.go:89] found id: ""
	I1202 12:52:34.442561   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.442568   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:34.442576   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:34.442587   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:34.494551   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:34.494582   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.508684   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:34.508713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:34.572790   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:34.572816   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:34.572835   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:34.649327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:34.649358   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:37.190648   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:37.203913   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:37.203966   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:37.243165   59162 cri.go:89] found id: ""
	I1202 12:52:37.243186   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.243194   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:37.243199   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:37.243246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:37.279317   59162 cri.go:89] found id: ""
	I1202 12:52:37.279343   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.279351   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:37.279356   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:37.279411   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:37.312655   59162 cri.go:89] found id: ""
	I1202 12:52:37.312684   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.312693   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:37.312702   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:37.312748   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:37.346291   59162 cri.go:89] found id: ""
	I1202 12:52:37.346319   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.346328   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:37.346334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:37.346382   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:37.381534   59162 cri.go:89] found id: ""
	I1202 12:52:37.381555   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.381563   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:37.381569   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:37.381621   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:37.416990   59162 cri.go:89] found id: ""
	I1202 12:52:37.417013   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.417020   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:37.417026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:37.417083   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:37.451149   59162 cri.go:89] found id: ""
	I1202 12:52:37.451174   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.451182   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:37.451187   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:37.451233   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:37.485902   59162 cri.go:89] found id: ""
	I1202 12:52:37.485929   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.485940   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:37.485950   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:37.485970   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:37.541615   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:37.541645   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:37.554846   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:37.554866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:37.622432   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:37.622457   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:37.622471   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:37.708793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:37.708832   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:35.154170   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:37.653220   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:36.660437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:37.751726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.252016   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.246822   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:40.260893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:40.260959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:40.294743   59162 cri.go:89] found id: ""
	I1202 12:52:40.294773   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.294782   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:40.294789   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:40.294845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:40.338523   59162 cri.go:89] found id: ""
	I1202 12:52:40.338557   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.338570   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:40.338577   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:40.338628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:40.373134   59162 cri.go:89] found id: ""
	I1202 12:52:40.373162   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.373170   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:40.373176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:40.373225   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:40.410197   59162 cri.go:89] found id: ""
	I1202 12:52:40.410233   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.410247   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:40.410256   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:40.410333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:40.442497   59162 cri.go:89] found id: ""
	I1202 12:52:40.442521   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.442530   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:40.442536   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:40.442597   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:40.477835   59162 cri.go:89] found id: ""
	I1202 12:52:40.477863   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.477872   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:40.477879   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:40.477936   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:40.511523   59162 cri.go:89] found id: ""
	I1202 12:52:40.511547   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.511559   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:40.511567   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:40.511628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:40.545902   59162 cri.go:89] found id: ""
	I1202 12:52:40.545928   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.545942   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:40.545962   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:40.545976   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:40.595638   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:40.595669   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:40.609023   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:40.609043   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:40.680826   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:40.680848   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:40.680866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:40.756551   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:40.756579   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:43.295761   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:43.308764   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:43.308836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:43.343229   59162 cri.go:89] found id: ""
	I1202 12:52:43.343258   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.343268   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:43.343276   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:43.343335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:39.653604   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:42.152871   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:39.732455   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:42.750873   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.250740   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:43.376841   59162 cri.go:89] found id: ""
	I1202 12:52:43.376861   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.376868   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:43.376874   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:43.376918   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:43.415013   59162 cri.go:89] found id: ""
	I1202 12:52:43.415033   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.415041   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:43.415046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:43.415094   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:43.451563   59162 cri.go:89] found id: ""
	I1202 12:52:43.451590   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.451601   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:43.451608   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:43.451658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:43.492838   59162 cri.go:89] found id: ""
	I1202 12:52:43.492859   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.492867   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:43.492872   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:43.492934   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:43.531872   59162 cri.go:89] found id: ""
	I1202 12:52:43.531898   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.531908   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:43.531914   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:43.531957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:43.566235   59162 cri.go:89] found id: ""
	I1202 12:52:43.566260   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.566270   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:43.566277   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:43.566332   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:43.601502   59162 cri.go:89] found id: ""
	I1202 12:52:43.601531   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.601542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:43.601553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:43.601567   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:43.650984   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:43.651012   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:43.664273   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:43.664296   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:43.735791   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:43.735819   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:43.735833   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:43.817824   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:43.817861   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.356130   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:46.368755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:46.368835   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:46.404552   59162 cri.go:89] found id: ""
	I1202 12:52:46.404574   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.404582   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:46.404588   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:46.404640   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:46.438292   59162 cri.go:89] found id: ""
	I1202 12:52:46.438318   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.438329   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:46.438337   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:46.438397   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:46.471614   59162 cri.go:89] found id: ""
	I1202 12:52:46.471636   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.471643   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:46.471649   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:46.471752   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:46.502171   59162 cri.go:89] found id: ""
	I1202 12:52:46.502193   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.502201   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:46.502207   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:46.502250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:46.533820   59162 cri.go:89] found id: ""
	I1202 12:52:46.533842   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.533851   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:46.533859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:46.533914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:46.566891   59162 cri.go:89] found id: ""
	I1202 12:52:46.566918   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.566928   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:46.566936   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:46.566980   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:46.599112   59162 cri.go:89] found id: ""
	I1202 12:52:46.599143   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.599154   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:46.599161   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:46.599215   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:46.630794   59162 cri.go:89] found id: ""
	I1202 12:52:46.630837   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.630849   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:46.630860   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:46.630876   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:46.644180   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:46.644210   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:46.705881   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:46.705921   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:46.705936   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:46.781327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:46.781359   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.820042   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:46.820072   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:44.654330   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:47.152273   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.816427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:48.884464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:47.751118   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.752726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.368930   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:49.381506   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:49.381556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:49.417928   59162 cri.go:89] found id: ""
	I1202 12:52:49.417955   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.417965   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:49.417977   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:49.418034   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:49.450248   59162 cri.go:89] found id: ""
	I1202 12:52:49.450276   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.450286   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:49.450295   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:49.450366   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:49.484288   59162 cri.go:89] found id: ""
	I1202 12:52:49.484311   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.484318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:49.484323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:49.484372   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:49.518565   59162 cri.go:89] found id: ""
	I1202 12:52:49.518585   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.518595   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:49.518602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:49.518650   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:49.552524   59162 cri.go:89] found id: ""
	I1202 12:52:49.552549   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.552556   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:49.552561   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:49.552609   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:49.586570   59162 cri.go:89] found id: ""
	I1202 12:52:49.586599   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.586610   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:49.586617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:49.586672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:49.622561   59162 cri.go:89] found id: ""
	I1202 12:52:49.622590   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.622601   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:49.622609   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:49.622666   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:49.659092   59162 cri.go:89] found id: ""
	I1202 12:52:49.659117   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.659129   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:49.659152   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:49.659170   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:49.672461   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:49.672491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:49.738609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:49.738637   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:49.738670   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:49.820458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:49.820488   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.860240   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:49.860269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.411571   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:52.425037   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:52.425106   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:52.458215   59162 cri.go:89] found id: ""
	I1202 12:52:52.458244   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.458255   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:52.458262   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:52.458316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:52.491781   59162 cri.go:89] found id: ""
	I1202 12:52:52.491809   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.491820   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:52.491827   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:52.491879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:52.528829   59162 cri.go:89] found id: ""
	I1202 12:52:52.528855   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.528864   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:52.528870   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:52.528914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:52.560930   59162 cri.go:89] found id: ""
	I1202 12:52:52.560957   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.560965   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:52.560971   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:52.561021   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:52.594102   59162 cri.go:89] found id: ""
	I1202 12:52:52.594139   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.594152   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:52.594160   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:52.594222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:52.627428   59162 cri.go:89] found id: ""
	I1202 12:52:52.627452   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.627460   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:52.627465   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:52.627529   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:52.659143   59162 cri.go:89] found id: ""
	I1202 12:52:52.659167   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.659175   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:52.659180   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:52.659230   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:52.691603   59162 cri.go:89] found id: ""
	I1202 12:52:52.691625   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.691632   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:52.691640   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:52.691651   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.741989   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:52.742016   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:52.755769   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:52.755790   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:52.826397   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:52.826418   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:52.826431   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:52.904705   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:52.904734   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.653476   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:52.152372   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:51.755127   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.252182   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:55.449363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:55.462294   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:55.462350   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:55.500829   59162 cri.go:89] found id: ""
	I1202 12:52:55.500856   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.500865   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:55.500871   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:55.500927   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:55.533890   59162 cri.go:89] found id: ""
	I1202 12:52:55.533920   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.533931   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:55.533942   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:55.533998   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:55.566686   59162 cri.go:89] found id: ""
	I1202 12:52:55.566715   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.566725   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:55.566736   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:55.566790   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:55.598330   59162 cri.go:89] found id: ""
	I1202 12:52:55.598357   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.598367   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:55.598374   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:55.598429   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:55.630648   59162 cri.go:89] found id: ""
	I1202 12:52:55.630676   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.630686   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:55.630694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:55.630755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:55.664611   59162 cri.go:89] found id: ""
	I1202 12:52:55.664633   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.664640   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:55.664645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:55.664687   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:55.697762   59162 cri.go:89] found id: ""
	I1202 12:52:55.697789   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.697797   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:55.697803   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:55.697853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:55.735239   59162 cri.go:89] found id: ""
	I1202 12:52:55.735263   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.735271   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:55.735279   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:55.735292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:55.805187   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:55.805217   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:55.805233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:55.888420   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:55.888452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.927535   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:55.927561   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:55.976883   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:55.976909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:54.152753   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:56.154364   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.654202   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.968436   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:58.036631   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:56.750816   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.752427   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.490700   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:58.504983   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:58.505053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:58.541332   59162 cri.go:89] found id: ""
	I1202 12:52:58.541352   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.541359   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:58.541365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:58.541409   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:58.579437   59162 cri.go:89] found id: ""
	I1202 12:52:58.579459   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.579466   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:58.579472   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:58.579521   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:58.617374   59162 cri.go:89] found id: ""
	I1202 12:52:58.617406   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.617417   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:58.617425   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:58.617486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:58.653242   59162 cri.go:89] found id: ""
	I1202 12:52:58.653269   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.653280   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:58.653287   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:58.653345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:58.686171   59162 cri.go:89] found id: ""
	I1202 12:52:58.686201   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.686210   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:58.686215   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:58.686262   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:58.719934   59162 cri.go:89] found id: ""
	I1202 12:52:58.719956   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.719966   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:58.719974   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:58.720030   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:58.759587   59162 cri.go:89] found id: ""
	I1202 12:52:58.759610   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.759619   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:58.759626   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:58.759678   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:58.790885   59162 cri.go:89] found id: ""
	I1202 12:52:58.790908   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.790915   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:58.790922   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:58.790934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:58.840192   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:58.840220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.853639   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:58.853663   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:58.924643   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:58.924669   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:58.924679   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:59.013916   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:59.013945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.552305   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:01.565577   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:01.565642   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:01.598261   59162 cri.go:89] found id: ""
	I1202 12:53:01.598294   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.598304   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:01.598310   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:01.598377   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:01.631527   59162 cri.go:89] found id: ""
	I1202 12:53:01.631556   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.631565   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:01.631570   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:01.631631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:01.670788   59162 cri.go:89] found id: ""
	I1202 12:53:01.670812   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.670820   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:01.670826   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:01.670880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:01.708801   59162 cri.go:89] found id: ""
	I1202 12:53:01.708828   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.708838   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:01.708846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:01.708914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:01.746053   59162 cri.go:89] found id: ""
	I1202 12:53:01.746074   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.746083   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:01.746120   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:01.746184   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:01.780873   59162 cri.go:89] found id: ""
	I1202 12:53:01.780894   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.780901   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:01.780907   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:01.780951   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:01.817234   59162 cri.go:89] found id: ""
	I1202 12:53:01.817259   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.817269   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:01.817276   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:01.817335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:01.850277   59162 cri.go:89] found id: ""
	I1202 12:53:01.850302   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.850317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:01.850327   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:01.850342   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:01.933014   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:01.933055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.971533   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:01.971562   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:02.020280   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:02.020311   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:02.034786   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:02.034814   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:02.104013   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:01.152305   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.153925   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:01.250308   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.250937   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:05.751259   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.604595   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:04.618004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:04.618057   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:04.651388   59162 cri.go:89] found id: ""
	I1202 12:53:04.651414   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.651428   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:04.651436   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:04.651495   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:04.686973   59162 cri.go:89] found id: ""
	I1202 12:53:04.686998   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.687005   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:04.687019   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:04.687063   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:04.720630   59162 cri.go:89] found id: ""
	I1202 12:53:04.720654   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.720661   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:04.720667   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:04.720724   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:04.754657   59162 cri.go:89] found id: ""
	I1202 12:53:04.754682   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.754689   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:04.754694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:04.754746   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:04.787583   59162 cri.go:89] found id: ""
	I1202 12:53:04.787611   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.787621   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:04.787628   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:04.787686   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:04.818962   59162 cri.go:89] found id: ""
	I1202 12:53:04.818988   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.818999   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:04.819006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:04.819059   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:04.852015   59162 cri.go:89] found id: ""
	I1202 12:53:04.852035   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.852042   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:04.852047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:04.852097   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:04.886272   59162 cri.go:89] found id: ""
	I1202 12:53:04.886294   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.886301   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:04.886309   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:04.886320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:04.934682   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:04.934712   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:04.947889   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:04.947911   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:05.018970   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:05.018995   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:05.019010   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:05.098203   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:05.098233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:07.637320   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:07.650643   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:07.650706   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:07.683468   59162 cri.go:89] found id: ""
	I1202 12:53:07.683491   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.683499   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:07.683504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:07.683565   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:07.719765   59162 cri.go:89] found id: ""
	I1202 12:53:07.719792   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.719799   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:07.719805   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:07.719855   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:07.760939   59162 cri.go:89] found id: ""
	I1202 12:53:07.760986   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.760996   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:07.761004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:07.761066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:07.799175   59162 cri.go:89] found id: ""
	I1202 12:53:07.799219   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.799231   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:07.799239   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:07.799300   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:07.831957   59162 cri.go:89] found id: ""
	I1202 12:53:07.831987   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.831999   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:07.832007   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:07.832067   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:07.865982   59162 cri.go:89] found id: ""
	I1202 12:53:07.866008   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.866015   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:07.866022   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:07.866080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:07.903443   59162 cri.go:89] found id: ""
	I1202 12:53:07.903467   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.903477   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:07.903484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:07.903541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:07.939268   59162 cri.go:89] found id: ""
	I1202 12:53:07.939293   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.939300   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:07.939310   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:07.939324   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:07.952959   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:07.952984   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:08.039178   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:08.039207   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:08.039223   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:08.121432   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:08.121469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:08.164739   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:08.164767   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:05.652537   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:07.652894   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.116377   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:07.188477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:08.250489   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.250657   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.718599   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:10.731079   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:10.731154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:10.767605   59162 cri.go:89] found id: ""
	I1202 12:53:10.767626   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.767633   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:10.767639   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:10.767689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:10.800464   59162 cri.go:89] found id: ""
	I1202 12:53:10.800483   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.800491   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:10.800496   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:10.800554   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:10.840808   59162 cri.go:89] found id: ""
	I1202 12:53:10.840836   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.840853   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:10.840859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:10.840922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:10.877653   59162 cri.go:89] found id: ""
	I1202 12:53:10.877681   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.877690   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:10.877698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:10.877755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:10.915849   59162 cri.go:89] found id: ""
	I1202 12:53:10.915873   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.915883   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:10.915891   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:10.915953   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:10.948652   59162 cri.go:89] found id: ""
	I1202 12:53:10.948680   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.948691   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:10.948697   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:10.948755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:10.983126   59162 cri.go:89] found id: ""
	I1202 12:53:10.983154   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.983165   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:10.983172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:10.983232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:11.015350   59162 cri.go:89] found id: ""
	I1202 12:53:11.015378   59162 logs.go:282] 0 containers: []
	W1202 12:53:11.015390   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:11.015400   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:11.015414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:11.028713   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:11.028737   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:11.095904   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:11.095932   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:11.095950   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:11.179078   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:11.179114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:11.216075   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:11.216106   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:09.653482   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:12.152117   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.272450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:12.750358   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:14.751316   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.774975   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:13.787745   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:13.787804   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:13.821793   59162 cri.go:89] found id: ""
	I1202 12:53:13.821824   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.821834   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:13.821840   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:13.821885   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:13.854831   59162 cri.go:89] found id: ""
	I1202 12:53:13.854855   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.854864   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:13.854871   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:13.854925   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:13.885113   59162 cri.go:89] found id: ""
	I1202 12:53:13.885142   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.885149   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:13.885155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:13.885201   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:13.915811   59162 cri.go:89] found id: ""
	I1202 12:53:13.915841   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.915851   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:13.915859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:13.915914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:13.948908   59162 cri.go:89] found id: ""
	I1202 12:53:13.948936   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.948946   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:13.948953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:13.949016   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:13.986502   59162 cri.go:89] found id: ""
	I1202 12:53:13.986531   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.986540   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:13.986548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:13.986607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:14.018182   59162 cri.go:89] found id: ""
	I1202 12:53:14.018210   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.018221   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:14.018229   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:14.018287   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:14.054185   59162 cri.go:89] found id: ""
	I1202 12:53:14.054221   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.054233   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:14.054244   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:14.054272   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:14.131353   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.131381   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:14.131402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:14.212787   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:14.212822   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:14.254043   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:14.254073   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:14.309591   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:14.309620   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:16.824827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:16.838150   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:16.838210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:16.871550   59162 cri.go:89] found id: ""
	I1202 12:53:16.871570   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.871577   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:16.871582   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:16.871625   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:16.908736   59162 cri.go:89] found id: ""
	I1202 12:53:16.908766   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.908775   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:16.908781   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:16.908844   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:16.941404   59162 cri.go:89] found id: ""
	I1202 12:53:16.941427   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.941437   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:16.941444   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:16.941500   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:16.971984   59162 cri.go:89] found id: ""
	I1202 12:53:16.972011   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.972023   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:16.972030   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:16.972079   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:17.004573   59162 cri.go:89] found id: ""
	I1202 12:53:17.004596   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.004607   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:17.004614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:17.004661   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:17.037171   59162 cri.go:89] found id: ""
	I1202 12:53:17.037199   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.037210   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:17.037218   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:17.037271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:17.070862   59162 cri.go:89] found id: ""
	I1202 12:53:17.070888   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.070899   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:17.070906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:17.070959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:17.102642   59162 cri.go:89] found id: ""
	I1202 12:53:17.102668   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.102678   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:17.102688   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:17.102701   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:17.182590   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:17.182623   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:17.224313   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:17.224346   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:17.272831   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:17.272855   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:17.286217   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:17.286240   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:17.357274   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.153570   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.651955   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:18.654103   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.340429   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:17.252036   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.751295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.858294   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:19.871731   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:19.871787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:19.906270   59162 cri.go:89] found id: ""
	I1202 12:53:19.906290   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.906297   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:19.906303   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:19.906345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:19.937769   59162 cri.go:89] found id: ""
	I1202 12:53:19.937790   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.937797   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:19.937802   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:19.937845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:19.971667   59162 cri.go:89] found id: ""
	I1202 12:53:19.971689   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.971706   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:19.971714   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:19.971787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:20.005434   59162 cri.go:89] found id: ""
	I1202 12:53:20.005455   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.005461   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:20.005467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:20.005512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:20.041817   59162 cri.go:89] found id: ""
	I1202 12:53:20.041839   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.041848   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:20.041856   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:20.041906   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:20.073923   59162 cri.go:89] found id: ""
	I1202 12:53:20.073946   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.073958   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:20.073966   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:20.074026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:20.107360   59162 cri.go:89] found id: ""
	I1202 12:53:20.107398   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.107409   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:20.107416   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:20.107479   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:20.153919   59162 cri.go:89] found id: ""
	I1202 12:53:20.153942   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.153952   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:20.153963   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:20.153977   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:20.211581   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:20.211610   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:20.227589   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:20.227615   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:20.305225   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:20.305250   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:20.305265   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:20.382674   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:20.382713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:22.924662   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:22.940038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:22.940101   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:22.984768   59162 cri.go:89] found id: ""
	I1202 12:53:22.984795   59162 logs.go:282] 0 containers: []
	W1202 12:53:22.984806   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:22.984815   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:22.984876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:23.024159   59162 cri.go:89] found id: ""
	I1202 12:53:23.024180   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.024188   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:23.024194   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:23.024254   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:23.059929   59162 cri.go:89] found id: ""
	I1202 12:53:23.059948   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.059956   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:23.059961   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:23.060003   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:23.093606   59162 cri.go:89] found id: ""
	I1202 12:53:23.093627   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.093633   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:23.093639   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:23.093689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:23.127868   59162 cri.go:89] found id: ""
	I1202 12:53:23.127893   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.127904   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:23.127910   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:23.127965   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:23.164988   59162 cri.go:89] found id: ""
	I1202 12:53:23.165006   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.165013   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:23.165018   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:23.165058   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:23.196389   59162 cri.go:89] found id: ""
	I1202 12:53:23.196412   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.196423   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:23.196430   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:23.196481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:23.229337   59162 cri.go:89] found id: ""
	I1202 12:53:23.229358   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.229366   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:23.229376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:23.229404   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:23.284041   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:23.284066   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:23.297861   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:23.297884   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:53:21.152126   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:23.154090   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:22.420399   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:22.250790   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:24.252122   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:53:23.364113   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:23.364131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:23.364142   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:23.446244   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:23.446273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:25.986668   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:25.998953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:25.999013   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:26.034844   59162 cri.go:89] found id: ""
	I1202 12:53:26.034868   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.034876   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:26.034883   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:26.034938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:26.067050   59162 cri.go:89] found id: ""
	I1202 12:53:26.067076   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.067083   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:26.067089   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:26.067152   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:26.098705   59162 cri.go:89] found id: ""
	I1202 12:53:26.098735   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.098746   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:26.098754   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:26.098812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:26.131283   59162 cri.go:89] found id: ""
	I1202 12:53:26.131312   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.131321   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:26.131327   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:26.131379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:26.164905   59162 cri.go:89] found id: ""
	I1202 12:53:26.164933   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.164943   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:26.164950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:26.165009   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:26.196691   59162 cri.go:89] found id: ""
	I1202 12:53:26.196715   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.196724   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:26.196732   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:26.196789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:26.227341   59162 cri.go:89] found id: ""
	I1202 12:53:26.227364   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.227374   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:26.227380   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:26.227436   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:26.260569   59162 cri.go:89] found id: ""
	I1202 12:53:26.260589   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.260597   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:26.260606   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:26.260619   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:26.313150   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:26.313175   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:26.327732   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:26.327762   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:26.392748   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:26.392768   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:26.392778   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:26.474456   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:26.474484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:24.146771   58902 pod_ready.go:82] duration metric: took 4m0.000100995s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" ...
	E1202 12:53:24.146796   58902 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 12:53:24.146811   58902 pod_ready.go:39] duration metric: took 4m6.027386938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:53:24.146852   58902 kubeadm.go:597] duration metric: took 4m15.570212206s to restartPrimaryControlPlane
	W1202 12:53:24.146901   58902 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:24.146926   58902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:25.492478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:26.253906   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:28.752313   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:29.018514   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:29.032328   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:29.032457   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:29.067696   59162 cri.go:89] found id: ""
	I1202 12:53:29.067720   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.067732   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:29.067738   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:29.067794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:29.101076   59162 cri.go:89] found id: ""
	I1202 12:53:29.101096   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.101103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:29.101108   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:29.101150   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:29.136446   59162 cri.go:89] found id: ""
	I1202 12:53:29.136473   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.136483   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:29.136489   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:29.136552   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:29.170820   59162 cri.go:89] found id: ""
	I1202 12:53:29.170849   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.170860   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:29.170868   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:29.170931   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:29.205972   59162 cri.go:89] found id: ""
	I1202 12:53:29.206001   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.206012   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:29.206020   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:29.206086   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:29.242118   59162 cri.go:89] found id: ""
	I1202 12:53:29.242155   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.242165   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:29.242172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:29.242222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:29.281377   59162 cri.go:89] found id: ""
	I1202 12:53:29.281405   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.281417   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:29.281426   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:29.281487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:29.316350   59162 cri.go:89] found id: ""
	I1202 12:53:29.316381   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.316393   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:29.316404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:29.316418   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:29.392609   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:29.392648   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.430777   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:29.430804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:29.484157   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:29.484190   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:29.498434   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:29.498457   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:29.568203   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.069043   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:32.081796   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:32.081867   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:32.115767   59162 cri.go:89] found id: ""
	I1202 12:53:32.115789   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.115797   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:32.115802   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:32.115861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:32.145962   59162 cri.go:89] found id: ""
	I1202 12:53:32.145984   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.145992   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:32.145999   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:32.146046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:32.177709   59162 cri.go:89] found id: ""
	I1202 12:53:32.177734   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.177744   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:32.177752   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:32.177796   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:32.211897   59162 cri.go:89] found id: ""
	I1202 12:53:32.211921   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.211930   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:32.211937   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:32.211994   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:32.244401   59162 cri.go:89] found id: ""
	I1202 12:53:32.244425   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.244434   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:32.244442   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:32.244503   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:32.278097   59162 cri.go:89] found id: ""
	I1202 12:53:32.278123   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.278140   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:32.278151   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:32.278210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:32.312740   59162 cri.go:89] found id: ""
	I1202 12:53:32.312774   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.312785   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:32.312793   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:32.312860   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:32.345849   59162 cri.go:89] found id: ""
	I1202 12:53:32.345878   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.345889   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:32.345901   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:32.345917   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:32.395961   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:32.395998   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:32.409582   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:32.409609   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:32.473717   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.473746   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:32.473763   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:32.548547   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:32.548580   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:31.572430   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:31.251492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:33.251616   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.750762   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.088628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:35.102152   59162 kubeadm.go:597] duration metric: took 4m2.014751799s to restartPrimaryControlPlane
	W1202 12:53:35.102217   59162 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:35.102244   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:36.768528   59162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666262663s)
	I1202 12:53:36.768601   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:36.783104   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:36.792966   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:36.802188   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:36.802205   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:36.802234   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:36.811253   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:36.811290   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:36.820464   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:36.829386   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:36.829426   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:36.838814   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.847241   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:36.847272   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.856295   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:36.864892   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:36.864929   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:36.873699   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:37.076297   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
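The cleanup logged just above (grep each kubeadm config under /etc/kubernetes for the expected control-plane endpoint, then rm -f anything missing or stale before re-running kubeadm init) follows a simple pattern. A minimal Go sketch of that pattern, assuming the endpoint and file list shown in the log — an illustration only, not minikube's actual code:

// Illustrative sketch of the stale-kubeconfig cleanup the log records above:
// keep each kubeadm-generated config only if it still points at the expected
// control-plane endpoint, otherwise delete it so `kubeadm init` regenerates it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443" // endpoint taken from the log
	configs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

	for _, name := range configs {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it, ignoring errors like `rm -f`.
			_ = os.Remove(path)
			fmt.Printf("removed stale config %s\n", path)
			continue
		}
		fmt.Printf("keeping %s\n", path)
	}
}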
	I1202 12:53:34.644489   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:38.250676   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.250779   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.724427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:43.796493   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:42.251341   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:44.751292   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.547760   58902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.400809303s)
	I1202 12:53:50.547840   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:50.564051   58902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:50.573674   58902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:50.582945   58902 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:50.582965   58902 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:50.582998   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:50.591979   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:50.592030   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:50.601043   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:50.609896   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:50.609945   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:50.618918   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.627599   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:50.627634   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.636459   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:50.644836   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:50.644880   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:50.653742   58902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:50.698104   58902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 12:53:50.698187   58902 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:53:50.811202   58902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:53:50.811340   58902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:53:50.811466   58902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 12:53:50.822002   58902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:53:47.252492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:49.750168   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.823836   58902 out.go:235]   - Generating certificates and keys ...
	I1202 12:53:50.823933   58902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:53:50.824031   58902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:53:50.824141   58902 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:53:50.824223   58902 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:53:50.824328   58902 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:53:50.824402   58902 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:53:50.824500   58902 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:53:50.824583   58902 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:53:50.824697   58902 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:53:50.824826   58902 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:53:50.824896   58902 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:53:50.824984   58902 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:53:50.912363   58902 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:53:50.997719   58902 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 12:53:51.181182   58902 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:53:51.424413   58902 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:53:51.526033   58902 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:53:51.526547   58902 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:53:51.528947   58902 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:53:51.530665   58902 out.go:235]   - Booting up control plane ...
	I1202 12:53:51.530761   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:53:51.530862   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:53:51.530946   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:53:51.551867   58902 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:53:51.557869   58902 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:53:51.557960   58902 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:53:51.690048   58902 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 12:53:51.690190   58902 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 12:53:52.190616   58902 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56624ms
	I1202 12:53:52.190735   58902 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 12:53:49.876477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:52.948470   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:51.752318   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:54.250701   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:57.192620   58902 kubeadm.go:310] [api-check] The API server is healthy after 5.001974319s
	I1202 12:53:57.205108   58902 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 12:53:57.217398   58902 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 12:53:57.241642   58902 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 12:53:57.241842   58902 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-953044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 12:53:57.252962   58902 kubeadm.go:310] [bootstrap-token] Using token: kqbw67.r50dkuvxntafmbtm
	I1202 12:53:57.254175   58902 out.go:235]   - Configuring RBAC rules ...
	I1202 12:53:57.254282   58902 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 12:53:57.258707   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 12:53:57.265127   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 12:53:57.268044   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 12:53:57.273630   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 12:53:57.276921   58902 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 12:53:57.598936   58902 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 12:53:58.031759   58902 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 12:53:58.598943   58902 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 12:53:58.599838   58902 kubeadm.go:310] 
	I1202 12:53:58.599900   58902 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 12:53:58.599927   58902 kubeadm.go:310] 
	I1202 12:53:58.600020   58902 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 12:53:58.600031   58902 kubeadm.go:310] 
	I1202 12:53:58.600067   58902 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 12:53:58.600150   58902 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 12:53:58.600249   58902 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 12:53:58.600266   58902 kubeadm.go:310] 
	I1202 12:53:58.600343   58902 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 12:53:58.600353   58902 kubeadm.go:310] 
	I1202 12:53:58.600418   58902 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 12:53:58.600429   58902 kubeadm.go:310] 
	I1202 12:53:58.600500   58902 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 12:53:58.600602   58902 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 12:53:58.600694   58902 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 12:53:58.600704   58902 kubeadm.go:310] 
	I1202 12:53:58.600878   58902 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 12:53:58.600996   58902 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 12:53:58.601008   58902 kubeadm.go:310] 
	I1202 12:53:58.601121   58902 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601248   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 12:53:58.601281   58902 kubeadm.go:310] 	--control-plane 
	I1202 12:53:58.601298   58902 kubeadm.go:310] 
	I1202 12:53:58.601437   58902 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 12:53:58.601451   58902 kubeadm.go:310] 
	I1202 12:53:58.601570   58902 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601726   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 12:53:58.601878   58902 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:53:58.602090   58902 cni.go:84] Creating CNI manager for ""
	I1202 12:53:58.602108   58902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:53:58.603597   58902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:53:58.604832   58902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:53:58.616597   58902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
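The log only reports that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file contents are not shown. As rough orientation only, a generic bridge-plus-portmap CNI configuration (hypothetical content, not the actual file minikube writes) could be produced like this:

// Hypothetical sketch of writing a bridge CNI conflist. The JSON below is a
// generic bridge+portmap example, not minikube's real 1-k8s.conflist.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}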
	I1202 12:53:58.633585   58902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 12:53:58.633639   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:58.633694   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-953044 minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=embed-certs-953044 minikube.k8s.io/primary=true
	I1202 12:53:58.843567   58902 ops.go:34] apiserver oom_adj: -16
	I1202 12:53:58.843643   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:56.252079   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:58.750596   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:59.344179   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:59.844667   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.343766   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.843808   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.343992   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.843750   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.344088   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.431425   58902 kubeadm.go:1113] duration metric: took 3.797838401s to wait for elevateKubeSystemPrivileges
	I1202 12:54:02.431466   58902 kubeadm.go:394] duration metric: took 4m53.907154853s to StartCluster
	I1202 12:54:02.431488   58902 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.431574   58902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:54:02.433388   58902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.433759   58902 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:54:02.433844   58902 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 12:54:02.433961   58902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-953044"
	I1202 12:54:02.433979   58902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-953044"
	I1202 12:54:02.433978   58902 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:54:02.433983   58902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-953044"
	I1202 12:54:02.434009   58902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-953044"
	I1202 12:54:02.433983   58902 addons.go:69] Setting metrics-server=true in profile "embed-certs-953044"
	I1202 12:54:02.434082   58902 addons.go:234] Setting addon metrics-server=true in "embed-certs-953044"
	W1202 12:54:02.434090   58902 addons.go:243] addon metrics-server should already be in state true
	I1202 12:54:02.434121   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	W1202 12:54:02.433990   58902 addons.go:243] addon storage-provisioner should already be in state true
	I1202 12:54:02.434195   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.434500   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434544   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434550   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434566   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434589   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434606   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.435408   58902 out.go:177] * Verifying Kubernetes components...
	I1202 12:54:02.436893   58902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:54:02.450113   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1202 12:54:02.450620   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.451022   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.451047   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.451376   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.451545   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.454345   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I1202 12:54:02.454346   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1202 12:54:02.454788   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.454832   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.455251   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455268   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455281   58902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-953044"
	W1202 12:54:02.455303   58902 addons.go:243] addon default-storageclass should already be in state true
	I1202 12:54:02.455336   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.455286   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455377   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455570   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455696   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455708   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.455739   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456068   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456085   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456105   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456122   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.470558   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I1202 12:54:02.470761   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I1202 12:54:02.470971   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471035   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43157
	I1202 12:54:02.471142   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471406   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471426   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471494   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471620   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471633   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471955   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472019   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.472035   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.472110   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472127   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472446   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472647   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472685   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.472721   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.474380   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.474597   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.476328   58902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 12:54:02.476338   58902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:54:02.477992   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 12:54:02.478008   58902 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 12:54:02.478022   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.478549   58902 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.478567   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 12:54:02.478584   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.481364   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481698   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.481725   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481956   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.482008   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482150   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.482274   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.482417   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.482503   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.482521   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482785   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.483079   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.483352   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.483478   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.489285   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I1202 12:54:02.489644   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.490064   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.490085   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.490346   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.490510   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.491774   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.491961   58902 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.491974   58902 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 12:54:02.491990   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.494680   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495069   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.495098   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495259   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.495392   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.495582   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.495700   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.626584   58902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:54:02.650914   58902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658909   58902 node_ready.go:49] node "embed-certs-953044" has status "Ready":"True"
	I1202 12:54:02.658931   58902 node_ready.go:38] duration metric: took 7.986729ms for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658939   58902 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:02.663878   58902 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:02.708572   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.711794   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 12:54:02.711813   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 12:54:02.729787   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.760573   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 12:54:02.760595   58902 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 12:54:02.814731   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:02.814756   58902 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 12:54:02.867045   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:03.549497   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.549532   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.549914   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.549970   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.549999   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550010   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.550032   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.550256   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.550360   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550336   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551311   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551333   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551629   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551591   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.551670   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.551686   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551694   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551907   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.552278   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.552295   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.577295   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.577322   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.577618   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.577631   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.577647   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.835721   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.835752   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836073   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836092   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836108   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.836118   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836460   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836478   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836489   58902 addons.go:475] Verifying addon metrics-server=true in "embed-certs-953044"
	I1202 12:54:03.836492   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.838858   58902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1202 12:54:03.840263   58902 addons.go:510] duration metric: took 1.406440873s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1202 12:53:59.032460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:02.100433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:01.251084   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:03.252024   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:05.752273   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:04.669768   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:07.171770   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:08.180411   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:08.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.751482   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:09.670413   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.669602   58902 pod_ready.go:93] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.669624   58902 pod_ready.go:82] duration metric: took 8.00571576s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.669634   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674276   58902 pod_ready.go:93] pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.674293   58902 pod_ready.go:82] duration metric: took 4.652882ms for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674301   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678330   58902 pod_ready.go:93] pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.678346   58902 pod_ready.go:82] duration metric: took 4.037883ms for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678354   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184565   58902 pod_ready.go:93] pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:12.184591   58902 pod_ready.go:82] duration metric: took 1.506229118s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184601   58902 pod_ready.go:39] duration metric: took 9.525652092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:12.184622   58902 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:12.184683   58902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:12.204339   58902 api_server.go:72] duration metric: took 9.770541552s to wait for apiserver process to appear ...
	I1202 12:54:12.204361   58902 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:12.204383   58902 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8443/healthz ...
	I1202 12:54:12.208020   58902 api_server.go:279] https://192.168.72.203:8443/healthz returned 200:
	ok
	I1202 12:54:12.208957   58902 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:12.208975   58902 api_server.go:131] duration metric: took 4.608337ms to wait for apiserver health ...
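The healthz wait recorded above boils down to polling GET /healthz on the apiserver until it answers 200. A minimal sketch of that loop, assuming the endpoint from the log and skipping TLS verification for brevity (the real check would trust the cluster CA instead):

// Minimal sketch of an apiserver /healthz poll with a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.203:8443/healthz" // endpoint taken from the log above
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}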
	I1202 12:54:12.208982   58902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:12.215103   58902 system_pods.go:59] 9 kube-system pods found
	I1202 12:54:12.215123   58902 system_pods.go:61] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.215128   58902 system_pods.go:61] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.215132   58902 system_pods.go:61] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.215135   58902 system_pods.go:61] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.215145   58902 system_pods.go:61] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.215150   58902 system_pods.go:61] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.215157   58902 system_pods.go:61] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.215171   58902 system_pods.go:61] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.215181   58902 system_pods.go:61] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.215190   58902 system_pods.go:74] duration metric: took 6.203134ms to wait for pod list to return data ...
	I1202 12:54:12.215198   58902 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:12.217406   58902 default_sa.go:45] found service account: "default"
	I1202 12:54:12.217421   58902 default_sa.go:55] duration metric: took 2.217536ms for default service account to be created ...
	I1202 12:54:12.217427   58902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:12.221673   58902 system_pods.go:86] 9 kube-system pods found
	I1202 12:54:12.221690   58902 system_pods.go:89] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.221695   58902 system_pods.go:89] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.221701   58902 system_pods.go:89] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.221705   58902 system_pods.go:89] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.221709   58902 system_pods.go:89] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.221712   58902 system_pods.go:89] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.221716   58902 system_pods.go:89] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.221724   58902 system_pods.go:89] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.221729   58902 system_pods.go:89] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.221736   58902 system_pods.go:126] duration metric: took 4.304449ms to wait for k8s-apps to be running ...
	I1202 12:54:12.221745   58902 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:12.221780   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:12.238687   58902 system_svc.go:56] duration metric: took 16.934566ms WaitForService to wait for kubelet
	I1202 12:54:12.238707   58902 kubeadm.go:582] duration metric: took 9.804914519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:12.238722   58902 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:12.268746   58902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:12.268776   58902 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:12.268790   58902 node_conditions.go:105] duration metric: took 30.063656ms to run NodePressure ...
	I1202 12:54:12.268802   58902 start.go:241] waiting for startup goroutines ...
	I1202 12:54:12.268813   58902 start.go:246] waiting for cluster config update ...
	I1202 12:54:12.268828   58902 start.go:255] writing updated cluster config ...
	I1202 12:54:12.269149   58902 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:12.315523   58902 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:12.317559   58902 out.go:177] * Done! kubectl is now configured to use "embed-certs-953044" cluster and "default" namespace by default
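The pod_ready waits that run throughout this log poll a pod until its Ready condition turns True. A rough client-go sketch of that idea, under the assumption of a kubeconfig path and pod name taken from the log (not the helper minikube itself uses):

// Rough sketch: poll a pod's Ready condition with client-go until a deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Path and pod name are illustrative, copied from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-953044", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}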
	I1202 12:54:11.252465   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:13.251203   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:15.251601   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:17.332421   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:17.751347   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.252108   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.404508   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:21.252458   57877 pod_ready.go:82] duration metric: took 4m0.007570673s for pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace to be "Ready" ...
	E1202 12:54:21.252479   57877 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1202 12:54:21.252487   57877 pod_ready.go:39] duration metric: took 4m2.808635222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:21.252501   57877 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:21.252524   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:21.252565   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:21.311644   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:21.311663   57877 cri.go:89] found id: ""
	I1202 12:54:21.311670   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:21.311712   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.316826   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:21.316881   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:21.366930   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:21.366951   57877 cri.go:89] found id: ""
	I1202 12:54:21.366959   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:21.366999   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.371132   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:21.371194   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:21.405238   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.405261   57877 cri.go:89] found id: ""
	I1202 12:54:21.405270   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:21.405312   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.409631   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:21.409687   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:21.444516   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.444535   57877 cri.go:89] found id: ""
	I1202 12:54:21.444542   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:21.444583   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.448736   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:21.448796   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:21.485458   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:21.485484   57877 cri.go:89] found id: ""
	I1202 12:54:21.485494   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:21.485546   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.489882   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:21.489953   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:21.525951   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.525971   57877 cri.go:89] found id: ""
	I1202 12:54:21.525978   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:21.526028   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.530141   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:21.530186   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:21.564886   57877 cri.go:89] found id: ""
	I1202 12:54:21.564909   57877 logs.go:282] 0 containers: []
	W1202 12:54:21.564920   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:21.564928   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:21.564981   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:21.601560   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.601585   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:21.601593   57877 cri.go:89] found id: ""
	I1202 12:54:21.601603   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:21.601660   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.605710   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.609870   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:21.609892   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.645558   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:21.645581   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.680733   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:21.680764   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.731429   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:21.731452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.764658   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:21.764680   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:22.249475   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:22.249511   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:22.305127   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:22.305162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:22.369496   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:22.369528   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:22.384486   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:22.384510   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:22.425402   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:22.425424   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:22.463801   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:22.463828   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:22.507022   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:22.507048   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:22.638422   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:22.638452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.190880   57877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:25.206797   57877 api_server.go:72] duration metric: took 4m14.027370187s to wait for apiserver process to appear ...
	I1202 12:54:25.206823   57877 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:25.206866   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:25.206924   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:25.241643   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.241669   57877 cri.go:89] found id: ""
	I1202 12:54:25.241680   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:25.241734   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.245997   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:25.246037   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:25.290955   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:25.290973   57877 cri.go:89] found id: ""
	I1202 12:54:25.290980   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:25.291029   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.295284   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:25.295329   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:25.333254   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:25.333275   57877 cri.go:89] found id: ""
	I1202 12:54:25.333284   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:25.333328   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.337649   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:25.337698   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:25.371662   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.371682   57877 cri.go:89] found id: ""
	I1202 12:54:25.371691   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:25.371739   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.376026   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:25.376075   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:25.411223   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:25.411238   57877 cri.go:89] found id: ""
	I1202 12:54:25.411245   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:25.411287   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.415307   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:25.415351   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:25.451008   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:25.451027   57877 cri.go:89] found id: ""
	I1202 12:54:25.451035   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:25.451089   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.455681   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:25.455727   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:25.499293   57877 cri.go:89] found id: ""
	I1202 12:54:25.499315   57877 logs.go:282] 0 containers: []
	W1202 12:54:25.499325   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:25.499332   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:25.499377   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:25.533874   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:25.533896   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:25.533903   57877 cri.go:89] found id: ""
	I1202 12:54:25.533912   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:25.533961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.537993   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.541881   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:25.541899   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:25.645488   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:25.645512   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.683783   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:25.683807   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:26.120334   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:26.120367   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:26.484425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:26.190493   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:26.190521   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:26.235397   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:26.235421   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:26.285411   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:26.285452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:26.331807   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:26.331836   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:26.374437   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:26.374461   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:26.436459   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:26.436487   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:26.472126   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:26.472162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:26.504819   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:26.504840   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:26.518789   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:26.518821   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.069521   57877 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 12:54:29.074072   57877 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 12:54:29.075022   57877 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:29.075041   57877 api_server.go:131] duration metric: took 3.868210222s to wait for apiserver health ...
	I1202 12:54:29.075048   57877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:29.075069   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:29.075112   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:29.110715   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:29.110735   57877 cri.go:89] found id: ""
	I1202 12:54:29.110742   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:29.110790   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.114994   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:29.115040   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:29.150431   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.150459   57877 cri.go:89] found id: ""
	I1202 12:54:29.150468   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:29.150525   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.154909   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:29.154967   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:29.198139   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.198162   57877 cri.go:89] found id: ""
	I1202 12:54:29.198172   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:29.198224   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.202969   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:29.203031   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:29.243771   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.243795   57877 cri.go:89] found id: ""
	I1202 12:54:29.243802   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:29.243843   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.248039   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:29.248106   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:29.286473   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.286492   57877 cri.go:89] found id: ""
	I1202 12:54:29.286498   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:29.286538   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.290543   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:29.290590   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:29.327899   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.327916   57877 cri.go:89] found id: ""
	I1202 12:54:29.327922   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:29.327961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.332516   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:29.332571   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:29.368204   57877 cri.go:89] found id: ""
	I1202 12:54:29.368236   57877 logs.go:282] 0 containers: []
	W1202 12:54:29.368247   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:29.368255   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:29.368301   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:29.407333   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.407358   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.407364   57877 cri.go:89] found id: ""
	I1202 12:54:29.407372   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:29.407425   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.412153   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.416525   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:29.416548   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.457360   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:29.457394   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.495662   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:29.495691   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.549304   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:29.549331   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.585693   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:29.585718   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.621888   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:29.621912   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.670118   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:29.670153   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:29.685833   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:29.685855   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:29.792525   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:29.792555   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.837090   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:29.837138   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.872862   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:29.872893   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:30.228483   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:30.228523   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:30.298252   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:30.298285   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:32.851536   57877 system_pods.go:59] 8 kube-system pods found
	I1202 12:54:32.851562   57877 system_pods.go:61] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.851567   57877 system_pods.go:61] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.851571   57877 system_pods.go:61] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.851574   57877 system_pods.go:61] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.851577   57877 system_pods.go:61] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.851580   57877 system_pods.go:61] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.851586   57877 system_pods.go:61] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.851590   57877 system_pods.go:61] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.851597   57877 system_pods.go:74] duration metric: took 3.776542886s to wait for pod list to return data ...
	I1202 12:54:32.851604   57877 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:32.853911   57877 default_sa.go:45] found service account: "default"
	I1202 12:54:32.853928   57877 default_sa.go:55] duration metric: took 2.318516ms for default service account to be created ...
	I1202 12:54:32.853935   57877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:32.858485   57877 system_pods.go:86] 8 kube-system pods found
	I1202 12:54:32.858508   57877 system_pods.go:89] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.858513   57877 system_pods.go:89] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.858519   57877 system_pods.go:89] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.858523   57877 system_pods.go:89] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.858526   57877 system_pods.go:89] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.858530   57877 system_pods.go:89] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.858536   57877 system_pods.go:89] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.858540   57877 system_pods.go:89] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.858547   57877 system_pods.go:126] duration metric: took 4.607096ms to wait for k8s-apps to be running ...
	I1202 12:54:32.858555   57877 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:32.858592   57877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:32.874267   57877 system_svc.go:56] duration metric: took 15.704013ms WaitForService to wait for kubelet
	I1202 12:54:32.874293   57877 kubeadm.go:582] duration metric: took 4m21.694870267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:32.874311   57877 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:32.877737   57877 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:32.877757   57877 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:32.877768   57877 node_conditions.go:105] duration metric: took 3.452076ms to run NodePressure ...
	I1202 12:54:32.877782   57877 start.go:241] waiting for startup goroutines ...
	I1202 12:54:32.877791   57877 start.go:246] waiting for cluster config update ...
	I1202 12:54:32.877807   57877 start.go:255] writing updated cluster config ...
	I1202 12:54:32.878129   57877 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:32.926190   57877 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:32.927894   57877 out.go:177] * Done! kubectl is now configured to use "no-preload-658679" cluster and "default" namespace by default
	I1202 12:54:29.556420   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:35.636450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:38.708454   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:44.788462   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:47.860484   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:53.940448   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:57.012536   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:03.092433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:06.164483   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:12.244464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:15.316647   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:21.396479   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:24.468584   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:32.968600   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:55:32.968731   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:55:32.970229   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:32.970291   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:32.970394   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:32.970513   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:32.970629   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:32.970717   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:32.972396   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:32.972491   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:32.972577   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:32.972734   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:32.972823   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:32.972926   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:32.973006   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:32.973108   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:32.973192   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:32.973318   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:32.973429   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:32.973501   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:32.973594   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:32.973658   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:32.973722   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:32.973819   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:32.973903   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:32.974041   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:32.974157   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:32.974206   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:32.974301   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:32.976508   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:32.976620   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:32.976741   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:32.976842   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:32.976957   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:32.977191   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:32.977281   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:55:32.977342   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977505   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977579   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977795   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977906   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978091   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978174   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978394   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978497   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978743   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978756   59162 kubeadm.go:310] 
	I1202 12:55:32.978801   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:55:32.978859   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:55:32.978868   59162 kubeadm.go:310] 
	I1202 12:55:32.978914   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:55:32.978961   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:55:32.979078   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:55:32.979088   59162 kubeadm.go:310] 
	I1202 12:55:32.979230   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:55:32.979279   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:55:32.979337   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:55:32.979346   59162 kubeadm.go:310] 
	I1202 12:55:32.979484   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:55:32.979580   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:55:32.979593   59162 kubeadm.go:310] 
	I1202 12:55:32.979721   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:55:32.979848   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:55:32.979968   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:55:32.980059   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:55:32.980127   59162 kubeadm.go:310] 
	W1202 12:55:32.980202   59162 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 12:55:32.980267   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:55:33.452325   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:55:33.467527   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:55:33.477494   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:55:33.477522   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:55:33.477575   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:55:33.487333   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:55:33.487395   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:55:33.497063   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:55:33.506552   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:55:33.506605   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:55:33.515968   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.524922   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:55:33.524956   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.534339   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:55:33.543370   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:55:33.543403   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:55:33.552970   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:55:33.624833   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:33.624990   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:33.767688   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:33.767796   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:33.767909   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:33.935314   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:30.548478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.624512   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.937193   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:33.937290   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:33.937402   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:33.937513   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:33.937620   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:33.937722   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:33.937791   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:33.937845   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:33.937896   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:33.937964   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:33.938028   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:33.938061   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:33.938108   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:34.167163   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:35.008947   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:35.304057   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:35.385824   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:35.409687   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:35.413131   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:35.413218   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:35.569508   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:35.571455   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:35.571596   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:35.578476   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:35.579686   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:35.580586   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:35.582869   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:39.700423   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:42.772498   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:48.852452   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:51.924490   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:58.004488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:01.076456   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:07.160425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:10.228467   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:15.585409   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:56:15.585530   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:15.585792   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:16.308453   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:20.586011   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:20.586257   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:19.380488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:25.460451   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:28.532425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:30.586783   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:30.587053   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:31.533399   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:31.533454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533725   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:31.533749   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533914   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:31.535344   61173 machine.go:96] duration metric: took 4m37.429393672s to provisionDockerMachine
	I1202 12:56:31.535386   61173 fix.go:56] duration metric: took 4m37.448634942s for fixHost
	I1202 12:56:31.535394   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 4m37.448659715s
	W1202 12:56:31.535408   61173 start.go:714] error starting host: provision: host is not running
	W1202 12:56:31.535498   61173 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1202 12:56:31.535507   61173 start.go:729] Will try again in 5 seconds ...
	I1202 12:56:36.536323   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:56:36.536434   61173 start.go:364] duration metric: took 71.395µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:56:36.536463   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:56:36.536471   61173 fix.go:54] fixHost starting: 
	I1202 12:56:36.536763   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:56:36.536790   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:56:36.551482   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1202 12:56:36.551962   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:56:36.552383   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:56:36.552405   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:56:36.552689   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:56:36.552849   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:36.552968   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:56:36.554481   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Stopped err=<nil>
	I1202 12:56:36.554501   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	W1202 12:56:36.554652   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:56:36.556508   61173 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653783" ...
	I1202 12:56:36.557534   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Start
	I1202 12:56:36.557690   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring networks are active...
	I1202 12:56:36.558371   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network default is active
	I1202 12:56:36.558713   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network mk-default-k8s-diff-port-653783 is active
	I1202 12:56:36.559023   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Getting domain xml...
	I1202 12:56:36.559739   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Creating domain...
	I1202 12:56:37.799440   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting to get IP...
	I1202 12:56:37.800397   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800918   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.800836   62278 retry.go:31] will retry after 192.811495ms: waiting for machine to come up
	I1202 12:56:37.995285   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995743   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995771   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.995697   62278 retry.go:31] will retry after 367.440749ms: waiting for machine to come up
	I1202 12:56:38.365229   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365781   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.365731   62278 retry.go:31] will retry after 350.196014ms: waiting for machine to come up
	I1202 12:56:38.717121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717650   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717681   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.717590   62278 retry.go:31] will retry after 557.454725ms: waiting for machine to come up
	I1202 12:56:39.276110   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276631   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:39.276536   62278 retry.go:31] will retry after 735.275509ms: waiting for machine to come up
	I1202 12:56:40.013307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013888   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.013833   62278 retry.go:31] will retry after 613.45623ms: waiting for machine to come up
	I1202 12:56:40.629220   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629731   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629776   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.629678   62278 retry.go:31] will retry after 748.849722ms: waiting for machine to come up
	I1202 12:56:41.380615   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381052   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381075   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:41.381023   62278 retry.go:31] will retry after 1.342160202s: waiting for machine to come up
	I1202 12:56:42.724822   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725315   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725355   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:42.725251   62278 retry.go:31] will retry after 1.693072543s: waiting for machine to come up
	I1202 12:56:44.420249   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420700   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420721   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:44.420658   62278 retry.go:31] will retry after 2.210991529s: waiting for machine to come up
	I1202 12:56:46.633486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633847   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:46.633807   62278 retry.go:31] will retry after 2.622646998s: waiting for machine to come up
	I1202 12:56:50.587516   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:50.587731   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:49.257705   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258232   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:49.258186   62278 retry.go:31] will retry after 2.375973874s: waiting for machine to come up
	I1202 12:56:51.636055   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636422   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636450   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:51.636379   62278 retry.go:31] will retry after 3.118442508s: waiting for machine to come up
	I1202 12:56:54.757260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757665   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Found IP for machine: 192.168.39.154
	I1202 12:56:54.757689   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has current primary IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757697   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserving static IP address...
	I1202 12:56:54.758088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.758108   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserved static IP address: 192.168.39.154
	I1202 12:56:54.758120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | skip adding static IP to network mk-default-k8s-diff-port-653783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"}
	I1202 12:56:54.758134   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Getting to WaitForSSH function...
	I1202 12:56:54.758142   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for SSH to be available...
	I1202 12:56:54.760333   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760643   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.760672   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760789   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH client type: external
	I1202 12:56:54.760812   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa (-rw-------)
	I1202 12:56:54.760855   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:56:54.760880   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | About to run SSH command:
	I1202 12:56:54.760892   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | exit 0
	I1202 12:56:54.884099   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | SSH cmd err, output: <nil>: 
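
For context, the "will retry after ..." lines above follow a randomized-backoff retry while the restarted VM acquires a DHCP lease and SSH becomes reachable. A minimal Go sketch of that pattern, assuming a hypothetical lookupIP helper (this is not minikube's actual retry.go code):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying the hypervisor's DHCP
    // leases for the domain's current IP address.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a jittered, roughly increasing delay
    // until an address is found or the deadline passes, mirroring the
    // "will retry after ...: waiting for machine to come up" lines above.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	base := 200 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		// Jittered backoff: grow the base delay and add up to 50% jitter.
    		delay := base + time.Duration(rand.Int63n(int64(base/2)))
    		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, delay)
    		time.Sleep(delay)
    		if base < 3*time.Second {
    			base += base / 2
    		}
    	}
    	return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }
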
	I1202 12:56:54.884435   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetConfigRaw
	I1202 12:56:54.885058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:54.887519   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.887823   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.887854   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.888041   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:56:54.888333   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:56:54.888352   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:54.888564   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:54.890754   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891062   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.891090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891254   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:54.891423   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891560   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891709   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:54.891851   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:54.892053   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:54.892070   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:56:54.996722   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:56:54.996751   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.996974   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:54.997004   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.997202   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.000026   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000425   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.000453   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000624   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.000810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.000978   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.001122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.001308   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.001540   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.001562   61173 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653783 && echo "default-k8s-diff-port-653783" | sudo tee /etc/hostname
	I1202 12:56:55.122933   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653783
	
	I1202 12:56:55.122965   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.125788   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126182   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.126219   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126406   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.126555   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126718   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126834   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.126973   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.127180   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.127206   61173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:56:55.242263   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
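
The two SSH commands above set the guest hostname and then make the change idempotent in /etc/hosts. A short Go sketch of how such commands can be composed, using a hypothetical runSSH stand-in rather than minikube's libmachine SSH client:

    package main

    import "fmt"

    // runSSH is a hypothetical stand-in for executing a command on the guest
    // over SSH; here it only echoes the command so the sketch stays self-contained.
    func runSSH(cmd string) error {
    	fmt.Println("ssh>", cmd)
    	return nil
    }

    // provisionHostname mirrors the provisioning steps in the log above: set the
    // hostname, then ensure /etc/hosts maps 127.0.1.1 to it exactly once.
    func provisionHostname(name string) error {
    	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    	if err := runSSH(set); err != nil {
    		return err
    	}
    	hosts := fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name)
    	return runSSH(hosts)
    }

    func main() {
    	if err := provisionHostname("default-k8s-diff-port-653783"); err != nil {
    		fmt.Println("provision failed:", err)
    	}
    }
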
	I1202 12:56:55.242291   61173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:56:55.242331   61173 buildroot.go:174] setting up certificates
	I1202 12:56:55.242340   61173 provision.go:84] configureAuth start
	I1202 12:56:55.242350   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:55.242604   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:55.245340   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245685   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.245719   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245882   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.248090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248481   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.248512   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248659   61173 provision.go:143] copyHostCerts
	I1202 12:56:55.248718   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:56:55.248733   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:56:55.248810   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:56:55.248920   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:56:55.248931   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:56:55.248965   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:56:55.249039   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:56:55.249049   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:56:55.249081   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:56:55.249152   61173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653783 san=[127.0.0.1 192.168.39.154 default-k8s-diff-port-653783 localhost minikube]
	I1202 12:56:55.688887   61173 provision.go:177] copyRemoteCerts
	I1202 12:56:55.688948   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:56:55.688976   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.691486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.691865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.691896   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.692056   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.692239   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.692403   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.692524   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:55.777670   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:56:55.802466   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 12:56:55.826639   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:56:55.850536   61173 provision.go:87] duration metric: took 608.183552ms to configureAuth
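
The configureAuth step above issues a server certificate whose SANs are listed in the "generating server cert ... san=[...]" line. A self-contained Go sketch of issuing such a certificate with crypto/x509 (an illustration only; the helper names and validity periods are assumptions, not minikube's implementation):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // issueCert creates a certificate from tmpl signed by parent/parentKey and
    // returns the parsed certificate along with its freshly generated key.
    func issueCert(tmpl, parent *x509.Certificate, parentKey *rsa.PrivateKey) (*x509.Certificate, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	signKey := parentKey
    	if signKey == nil { // self-signed (CA) case
    		signKey = key
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, parent, &key.PublicKey, signKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	cert, err := x509.ParseCertificate(der)
    	return cert, key, err
    }

    func main() {
    	now := time.Now()
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             now,
    		NotAfter:              now.AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caCert, caKey, err := issueCert(caTmpl, caTmpl, nil)
    	if err != nil {
    		panic(err)
    	}
    	// Server cert carrying the SANs listed in the log line above.
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-653783"}},
    		NotBefore:    now,
    		NotAfter:     now.AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.154")},
    		DNSNames:     []string{"default-k8s-diff-port-653783", "localhost", "minikube"},
    	}
    	srvCert, _, err := issueCert(srvTmpl, caCert, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("issued server cert for", srvCert.DNSNames, srvCert.IPAddresses)
    }
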
	I1202 12:56:55.850560   61173 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:56:55.850731   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:56:55.850813   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.853607   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.853991   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.854024   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.854122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.854294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854436   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854598   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.854734   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.854883   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.854899   61173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:56:56.083902   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:56:56.083931   61173 machine.go:96] duration metric: took 1.195584241s to provisionDockerMachine
	I1202 12:56:56.083944   61173 start.go:293] postStartSetup for "default-k8s-diff-port-653783" (driver="kvm2")
	I1202 12:56:56.083957   61173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:56:56.083974   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.084276   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:56:56.084307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.087400   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087727   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.087750   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.088088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.088272   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.088448   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.170612   61173 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:56:56.175344   61173 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:56:56.175366   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:56:56.175454   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:56:56.175529   61173 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:56:56.175610   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:56:56.185033   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:56:56.209569   61173 start.go:296] duration metric: took 125.611321ms for postStartSetup
	I1202 12:56:56.209605   61173 fix.go:56] duration metric: took 19.673134089s for fixHost
	I1202 12:56:56.209623   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.212600   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.212883   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.212923   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.213137   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.213395   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213575   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213708   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.213854   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:56.214014   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:56.214032   61173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:56:56.320723   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733144216.287359296
	
	I1202 12:56:56.320744   61173 fix.go:216] guest clock: 1733144216.287359296
	I1202 12:56:56.320753   61173 fix.go:229] Guest: 2024-12-02 12:56:56.287359296 +0000 UTC Remote: 2024-12-02 12:56:56.209609687 +0000 UTC m=+302.261021771 (delta=77.749609ms)
	I1202 12:56:56.320776   61173 fix.go:200] guest clock delta is within tolerance: 77.749609ms
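
The "guest clock delta is within tolerance" check compares the guest's `date +%s.%N` output with the host-side timestamp and skips resetting the clock when the difference is small. A minimal Go sketch of that comparison (the 1s tolerance here is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinClockTolerance reports whether the guest clock is close enough to
    // the host clock to leave it alone, returning the absolute delta as well.
    func withinClockTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	guest := time.Unix(1733144216, 287359296) // from `date +%s.%N` on the guest
    	host := time.Unix(1733144216, 209609687)  // host-side reference timestamp
    	delta, ok := withinClockTolerance(guest, host, time.Second)
    	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
    }
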
	I1202 12:56:56.320781   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 19.784333398s
	I1202 12:56:56.320797   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.321011   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:56.323778   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324117   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.324136   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324289   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324759   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324921   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324984   61173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:56:56.325034   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.325138   61173 ssh_runner.go:195] Run: cat /version.json
	I1202 12:56:56.325164   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.327744   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328000   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328083   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328262   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328373   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.328774   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328769   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.328908   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.329007   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.405370   61173 ssh_runner.go:195] Run: systemctl --version
	I1202 12:56:56.427743   61173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:56:56.574416   61173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:56:56.580858   61173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:56:56.580948   61173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:56:56.597406   61173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:56:56.597427   61173 start.go:495] detecting cgroup driver to use...
	I1202 12:56:56.597472   61173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:56:56.612456   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:56:56.625811   61173 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:56:56.625847   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:56:56.642677   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:56:56.657471   61173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:56:56.776273   61173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:56:56.949746   61173 docker.go:233] disabling docker service ...
	I1202 12:56:56.949807   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:56:56.964275   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:56:56.977461   61173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:56:57.091134   61173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:56:57.209421   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:56:57.223153   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:56:57.241869   61173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:56:57.241933   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.252117   61173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:56:57.252174   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.262799   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.275039   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.285987   61173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:56:57.296968   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.307242   61173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.324555   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
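
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and the cgroup manager. A small Go sketch that renders the same style of in-place sed edits from a table of overrides (values taken from the log; the helper names are assumptions):

    package main

    import "fmt"

    // crioOverrides mirrors the settings applied above: the pause image and the
    // cgroup manager cri-o should use.
    var crioOverrides = map[string]string{
    	"pause_image":    "registry.k8s.io/pause:3.10",
    	"cgroup_manager": "cgroupfs",
    }

    // sedCommands renders one in-place sed per key, the same shape as the
    // sed -i 's|^.*key = .*$|key = "value"|' commands in the log.
    func sedCommands(conf string, overrides map[string]string) []string {
    	var cmds []string
    	for k, v := range overrides {
    		cmds = append(cmds, fmt.Sprintf(`sudo sed -i 's|^.*%[1]s = .*$|%[1]s = "%[2]s"|' %[3]s`, k, v, conf))
    	}
    	return cmds
    }

    func main() {
    	for _, c := range sedCommands("/etc/crio/crio.conf.d/02-crio.conf", crioOverrides) {
    		fmt.Println(c)
    	}
    }
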
	I1202 12:56:57.335395   61173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:56:57.344411   61173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:56:57.344450   61173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:56:57.357400   61173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
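
The netfilter sequence above is a probe-then-fallback: the sysctl read fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled. A short Go sketch of that fallback logic using os/exec (illustration only; running it requires root):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns its combined output and error so the
    // caller can react to failures.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	return string(out), err
    }

    // ensureBridgeNetfilter mirrors the log above: probe the
    // bridge-nf-call-iptables sysctl, load br_netfilter if the probe fails,
    // then make sure IPv4 forwarding is on.
    func ensureBridgeNetfilter() error {
    	if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("sysctl probe failed (module likely not loaded), loading br_netfilter")
    		if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	_, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    	return err
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("error:", err)
    	}
    }
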
	I1202 12:56:57.366269   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:56:57.486764   61173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:56:57.574406   61173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:56:57.574464   61173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:56:57.579268   61173 start.go:563] Will wait 60s for crictl version
	I1202 12:56:57.579328   61173 ssh_runner.go:195] Run: which crictl
	I1202 12:56:57.583110   61173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:56:57.621921   61173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:56:57.622003   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.650543   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.683842   61173 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:56:57.684861   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:57.687188   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687459   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:57.687505   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687636   61173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:56:57.691723   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:56:57.704869   61173 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:56:57.704999   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:56:57.705054   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:56:57.738780   61173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 12:56:57.738828   61173 ssh_runner.go:195] Run: which lz4
	I1202 12:56:57.743509   61173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:56:57.747763   61173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:56:57.747784   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 12:56:59.105988   61173 crio.go:462] duration metric: took 1.362506994s to copy over tarball
	I1202 12:56:59.106062   61173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:57:01.191007   61173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084920502s)
	I1202 12:57:01.191031   61173 crio.go:469] duration metric: took 2.085014298s to extract the tarball
	I1202 12:57:01.191038   61173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 12:57:01.229238   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:57:01.272133   61173 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:57:01.272156   61173 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:57:01.272164   61173 kubeadm.go:934] updating node { 192.168.39.154 8444 v1.31.2 crio true true} ...
	I1202 12:57:01.272272   61173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:57:01.272330   61173 ssh_runner.go:195] Run: crio config
	I1202 12:57:01.318930   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:01.318957   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:01.318968   61173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:57:01.318994   61173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653783 NodeName:default-k8s-diff-port-653783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:57:01.319125   61173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:57:01.319184   61173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:57:01.330162   61173 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:57:01.330226   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:57:01.340217   61173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 12:57:01.356786   61173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:57:01.373210   61173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1202 12:57:01.390184   61173 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1202 12:57:01.394099   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:57:01.406339   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:57:01.526518   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:57:01.543879   61173 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783 for IP: 192.168.39.154
	I1202 12:57:01.543899   61173 certs.go:194] generating shared ca certs ...
	I1202 12:57:01.543920   61173 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:57:01.544070   61173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:57:01.544134   61173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:57:01.544147   61173 certs.go:256] generating profile certs ...
	I1202 12:57:01.544285   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.key
	I1202 12:57:01.544377   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key.44fa7240
	I1202 12:57:01.544429   61173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key
	I1202 12:57:01.544579   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:57:01.544608   61173 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:57:01.544617   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:57:01.544636   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:57:01.544659   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:57:01.544688   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:57:01.544727   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:57:01.545381   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:57:01.580933   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:57:01.621199   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:57:01.648996   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:57:01.681428   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 12:57:01.710907   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:57:01.741414   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:57:01.766158   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:57:01.789460   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:57:01.812569   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:57:01.836007   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:57:01.858137   61173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:57:01.874315   61173 ssh_runner.go:195] Run: openssl version
	I1202 12:57:01.880190   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:57:01.893051   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898250   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898306   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.904207   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:57:01.915975   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:57:01.927977   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932436   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932478   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.938049   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:57:01.948744   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:57:01.959472   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963806   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963839   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.969412   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
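
The openssl/ln sequence above installs each CA into the system trust store: `openssl x509 -hash -noout` prints the subject hash that OpenSSL uses as the symlink name (hence b5213941.0, 51391683.0 and 3ec20f2e.0). A minimal sketch of the same step done by hand:

    # The trust-store symlink name is the certificate's subject hash plus a ".0" suffix
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # "openssl rehash /etc/ssl/certs" (or c_rehash) rebuilds all such links in one pass
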
	I1202 12:57:01.980743   61173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:57:01.986211   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:57:01.992717   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:57:01.998781   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:57:02.004934   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:57:02.010903   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:57:02.016677   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
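
Each `-checkend 86400` call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will not expire in that window, so the existing certificate is kept. For example:

    # Exit 0: certificate does NOT expire within the next 24h; non-zero: it needs regeneration
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "cert ok for at least 24h"
    else
        echo "cert expires within 24h"
    fi
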
	I1202 12:57:02.022595   61173 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:57:02.022680   61173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:57:02.022711   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.060425   61173 cri.go:89] found id: ""
	I1202 12:57:02.060497   61173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:57:02.070807   61173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:57:02.070827   61173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:57:02.070868   61173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:57:02.081036   61173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:57:02.082088   61173 kubeconfig.go:125] found "default-k8s-diff-port-653783" server: "https://192.168.39.154:8444"
	I1202 12:57:02.084179   61173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:57:02.094381   61173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
	I1202 12:57:02.094429   61173 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:57:02.094441   61173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:57:02.094485   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.129098   61173 cri.go:89] found id: ""
	I1202 12:57:02.129152   61173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:57:02.146731   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:57:02.156860   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:57:02.156881   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 12:57:02.156924   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 12:57:02.166273   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:57:02.166322   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:57:02.175793   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 12:57:02.184665   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:57:02.184707   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:57:02.194243   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.203173   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:57:02.203217   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.212563   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 12:57:02.221640   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:57:02.221682   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:57:02.230764   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:57:02.241691   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:02.353099   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.283720   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.487082   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.564623   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.644136   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:57:03.644219   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.144882   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.644873   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.144778   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.645022   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.662892   61173 api_server.go:72] duration metric: took 2.01875734s to wait for apiserver process to appear ...
	I1202 12:57:05.662920   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:57:05.662943   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.328451   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.328479   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.328492   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.368504   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.368547   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.664065   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.681253   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:08.681319   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.163310   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.169674   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:09.169699   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.663220   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.667397   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
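
The 403 responses earlier in this sequence are expected while the apiserver is still bootstrapping: the probe is treated as system:anonymous, which is rejected until the RBAC bootstrap roles (shown as failing post-start hooks in the 500 responses) have been created; once /healthz returns 200 the wait completes. The same endpoint can be probed by hand, e.g.:

    # Unauthenticated, as the health waiter does (endpoint taken from this run)
    curl -k 'https://192.168.39.154:8444/healthz?verbose'
    # Authenticated, once the kubeconfig is in place
    kubectl --context default-k8s-diff-port-653783 get --raw '/healthz?verbose'
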
	I1202 12:57:09.675558   61173 api_server.go:141] control plane version: v1.31.2
	I1202 12:57:09.675582   61173 api_server.go:131] duration metric: took 4.012653559s to wait for apiserver health ...
	I1202 12:57:09.675592   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:09.675601   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:09.677275   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:57:09.678527   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:57:09.690640   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
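
With the bridge CNI config written to /etc/cni/net.d/1-k8s.conflist, CRI-O should report its network as ready. A quick sketch for confirming that on the node (profile name taken from this run):

    minikube ssh -p default-k8s-diff-port-653783 -- sudo cat /etc/cni/net.d/1-k8s.conflist
    minikube ssh -p default-k8s-diff-port-653783 -- sudo crictl info | grep -i networkready
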
	I1202 12:57:09.708185   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:57:09.724719   61173 system_pods.go:59] 8 kube-system pods found
	I1202 12:57:09.724747   61173 system_pods.go:61] "coredns-7c65d6cfc9-7g74d" [a35c0ad2-6c02-4e14-afe5-887b3b5fd70f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 12:57:09.724755   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [25bc45db-481f-4c88-853b-105a32e1e8e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 12:57:09.724763   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [af0f2123-8eac-4f90-bc06-1fc1cb10deda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 12:57:09.724769   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [c18b1705-438b-4954-941e-cfe5a3a0f6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 12:57:09.724777   61173 system_pods.go:61] "kube-proxy-5t9gh" [35d08e89-5ad8-4fcb-9bff-5c12bc1fb497] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 12:57:09.724782   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [0db501e4-36fb-4a67-b11d-d6d9f3fa1383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 12:57:09.724789   61173 system_pods.go:61] "metrics-server-6867b74b74-9v79b" [418c7615-5d41-4a24-b497-674f55573a0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:57:09.724794   61173 system_pods.go:61] "storage-provisioner" [dab6b0c7-8e10-435f-a57c-76044eaa11c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 12:57:09.724799   61173 system_pods.go:74] duration metric: took 16.592713ms to wait for pod list to return data ...
	I1202 12:57:09.724808   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:57:09.731235   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:57:09.731260   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 12:57:09.731274   61173 node_conditions.go:105] duration metric: took 6.4605ms to run NodePressure ...
	I1202 12:57:09.731293   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:10.021346   61173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025152   61173 kubeadm.go:739] kubelet initialised
	I1202 12:57:10.025171   61173 kubeadm.go:740] duration metric: took 3.798597ms waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025178   61173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:57:10.029834   61173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.033699   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033718   61173 pod_ready.go:82] duration metric: took 3.86169ms for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.033726   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033731   61173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.037291   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037308   61173 pod_ready.go:82] duration metric: took 3.569468ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.037317   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037322   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.041016   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041035   61173 pod_ready.go:82] duration metric: took 3.705222ms for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.041046   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041071   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:12.047581   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:14.048663   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:16.547831   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:19.047816   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.047839   61173 pod_ready.go:82] duration metric: took 9.006753973s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.047850   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052277   61173 pod_ready.go:93] pod "kube-proxy-5t9gh" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.052296   61173 pod_ready.go:82] duration metric: took 4.440131ms for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052305   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:21.058989   61173 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:22.558501   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:22.558524   61173 pod_ready.go:82] duration metric: took 3.506212984s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:22.558533   61173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:24.564668   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:27.064209   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
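
The pod_ready waiter above polls each system-critical pod for the Ready condition with a per-pod 4m timeout. Roughly the same check can be made by hand with kubectl (context name from this run; the kube-dns label is one of those listed in the waiter's label set above):

    kubectl --context default-k8s-diff-port-653783 -n kube-system \
        wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl --context default-k8s-diff-port-653783 -n kube-system get pods -o wide
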
	I1202 12:57:30.586451   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:57:30.586705   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586735   59162 kubeadm.go:310] 
	I1202 12:57:30.586786   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:57:30.586842   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:57:30.586859   59162 kubeadm.go:310] 
	I1202 12:57:30.586924   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:57:30.586990   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:57:30.587140   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:57:30.587152   59162 kubeadm.go:310] 
	I1202 12:57:30.587292   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:57:30.587347   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:57:30.587387   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:57:30.587405   59162 kubeadm.go:310] 
	I1202 12:57:30.587557   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:57:30.587642   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:57:30.587655   59162 kubeadm.go:310] 
	I1202 12:57:30.587751   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:57:30.587841   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:57:30.587923   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:57:30.588029   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:57:30.588043   59162 kubeadm.go:310] 
	I1202 12:57:30.588959   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:57:30.589087   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:57:30.589211   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:57:30.589277   59162 kubeadm.go:394] duration metric: took 7m57.557592718s to StartCluster
	I1202 12:57:30.589312   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:57:30.589358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:57:30.634368   59162 cri.go:89] found id: ""
	I1202 12:57:30.634402   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.634414   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:57:30.634423   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:57:30.634489   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:57:30.669582   59162 cri.go:89] found id: ""
	I1202 12:57:30.669605   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.669617   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:57:30.669625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:57:30.669679   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:57:30.707779   59162 cri.go:89] found id: ""
	I1202 12:57:30.707805   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.707815   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:57:30.707823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:57:30.707878   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:57:30.745724   59162 cri.go:89] found id: ""
	I1202 12:57:30.745751   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.745761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:57:30.745768   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:57:30.745816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:57:30.782946   59162 cri.go:89] found id: ""
	I1202 12:57:30.782969   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.782980   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:57:30.782987   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:57:30.783040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:57:30.821743   59162 cri.go:89] found id: ""
	I1202 12:57:30.821776   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.821787   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:57:30.821795   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:57:30.821843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:57:30.859754   59162 cri.go:89] found id: ""
	I1202 12:57:30.859783   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.859793   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:57:30.859801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:57:30.859876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:57:30.893632   59162 cri.go:89] found id: ""
	I1202 12:57:30.893660   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.893668   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:57:30.893677   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:57:30.893690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:57:30.946387   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:57:30.946413   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:57:30.960540   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:57:30.960565   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:57:31.038246   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:57:31.038267   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:57:31.038279   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:57:31.155549   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:57:31.155584   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 12:57:31.221709   59162 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:57:31.221773   59162 out.go:270] * 
	W1202 12:57:31.221846   59162 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.221868   59162 out.go:270] * 
	W1202 12:57:31.222987   59162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:57:31.226661   59162 out.go:201] 
	W1202 12:57:31.227691   59162 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.227739   59162 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:57:31.227763   59162 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:57:31.229696   59162 out.go:201] 
	
	
	==> CRI-O <==
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.322712231Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144252322690309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4778645a-f6f5-4153-84b6-40b5c54fd903 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.323203474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a28d7f24-aa63-43d4-893e-f20f3d8828c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.323256585Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a28d7f24-aa63-43d4-893e-f20f3d8828c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.323286309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a28d7f24-aa63-43d4-893e-f20f3d8828c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.360439715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24d1f4fe-a4e7-41a8-a29f-d9ff1bb75cfc name=/runtime.v1.RuntimeService/Version
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.360509876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24d1f4fe-a4e7-41a8-a29f-d9ff1bb75cfc name=/runtime.v1.RuntimeService/Version
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.361680026Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2016365e-5fb8-498c-bf11-4e004bf6f83d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.362061593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144252362041471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2016365e-5fb8-498c-bf11-4e004bf6f83d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.362573679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbc40047-8b99-4978-893f-3a96e869f33d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.362623966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbc40047-8b99-4978-893f-3a96e869f33d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.362654053Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fbc40047-8b99-4978-893f-3a96e869f33d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.395100552Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c51c22e-a7c8-4b06-a427-1fa7117d632b name=/runtime.v1.RuntimeService/Version
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.395193007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c51c22e-a7c8-4b06-a427-1fa7117d632b name=/runtime.v1.RuntimeService/Version
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.396270942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2787398e-91a4-43fc-a908-722aae7ec151 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.396737972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144252396713393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2787398e-91a4-43fc-a908-722aae7ec151 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.397408658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc2f491c-f5c7-4a3e-9591-90367f5c61a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.397474264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc2f491c-f5c7-4a3e-9591-90367f5c61a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.397506196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fc2f491c-f5c7-4a3e-9591-90367f5c61a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.429799398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20830deb-7488-4584-a7c4-f6b326fad01c name=/runtime.v1.RuntimeService/Version
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.429893850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20830deb-7488-4584-a7c4-f6b326fad01c name=/runtime.v1.RuntimeService/Version
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.430980363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd01db1f-c0f5-4691-ab62-e679bc367f71 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.431409961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144252431390969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd01db1f-c0f5-4691-ab62-e679bc367f71 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.431814179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c67a1aa-f510-4f2b-89d5-37c3dd9c0e30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.431912619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c67a1aa-f510-4f2b-89d5-37c3dd9c0e30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 12:57:32 old-k8s-version-666766 crio[628]: time="2024-12-02 12:57:32.431985924Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0c67a1aa-f510-4f2b-89d5-37c3dd9c0e30 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 12:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056211] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.145598] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.034204] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.629273] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.854910] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.063738] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078588] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.173629] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.134990] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.253737] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.528775] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.061339] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.164841] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[ +11.104852] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 2 12:53] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Dec 2 12:55] systemd-fstab-generator[5352]: Ignoring "noauto" option for root device
	[  +0.070336] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:57:32 up 8 min,  0 users,  load average: 0.02, 0.20, 0.15
	Linux old-k8s-version-666766 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: goroutine 144 [chan receive]:
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0xc0001b0690, 0xc000243920)
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000347790, 0xc000a79080)
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: goroutine 145 [chan receive]:
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000c807e0)
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 02 12:57:30 old-k8s-version-666766 kubelet[5534]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 02 12:57:30 old-k8s-version-666766 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 02 12:57:30 old-k8s-version-666766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 12:57:31 old-k8s-version-666766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 02 12:57:31 old-k8s-version-666766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 02 12:57:31 old-k8s-version-666766 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 02 12:57:31 old-k8s-version-666766 kubelet[5591]: I1202 12:57:31.229015    5591 server.go:416] Version: v1.20.0
	Dec 02 12:57:31 old-k8s-version-666766 kubelet[5591]: I1202 12:57:31.229515    5591 server.go:837] Client rotation is on, will bootstrap in background
	Dec 02 12:57:31 old-k8s-version-666766 kubelet[5591]: I1202 12:57:31.235179    5591 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 02 12:57:31 old-k8s-version-666766 kubelet[5591]: W1202 12:57:31.238380    5591 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 02 12:57:31 old-k8s-version-666766 kubelet[5591]: I1202 12:57:31.239541    5591 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (228.434919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-666766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (704.80s)
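The SecondStart failure above exits with K8S_KUBELET_NOT_RUNNING: on this v1.20.0 profile the kubelet never becomes healthy (the journal shows it crash-looping, "restart counter is at 20", and "Cannot detect current cgroup on cgroup v2"), so 'kubeadm init' times out waiting for the control plane. A minimal manual-diagnosis sketch, assuming the profile name from the log and using only the remedies the output itself suggests (any start flags beyond the cgroup-driver override are illustrative):

    # retry the start with the cgroup driver override suggested in the error message
    out/minikube-linux-amd64 start -p old-k8s-version-666766 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # inspect the kubelet and any control-plane containers inside the VM
    out/minikube-linux-amd64 ssh -p old-k8s-version-666766 -- 'sudo journalctl -xeu kubelet | tail -n 50'
    out/minikube-linux-amd64 ssh -p old-k8s-version-666766 -- \
      "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"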

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-653783 --alsologtostderr -v=3
E1202 12:50:01.370399   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-653783 --alsologtostderr -v=3: exit status 82 (2m0.469437982s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-653783"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:49:22.572624   60358 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:49:22.572870   60358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:49:22.572880   60358 out.go:358] Setting ErrFile to fd 2...
	I1202 12:49:22.572887   60358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:49:22.573094   60358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:49:22.573322   60358 out.go:352] Setting JSON to false
	I1202 12:49:22.573424   60358 mustload.go:65] Loading cluster: default-k8s-diff-port-653783
	I1202 12:49:22.573808   60358 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:49:22.573888   60358 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:49:22.574070   60358 mustload.go:65] Loading cluster: default-k8s-diff-port-653783
	I1202 12:49:22.574193   60358 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:49:22.574225   60358 stop.go:39] StopHost: default-k8s-diff-port-653783
	I1202 12:49:22.574649   60358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:49:22.574697   60358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:49:22.588983   60358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42971
	I1202 12:49:22.589441   60358 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:49:22.590009   60358 main.go:141] libmachine: Using API Version  1
	I1202 12:49:22.590030   60358 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:49:22.590339   60358 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:49:22.592353   60358 out.go:177] * Stopping node "default-k8s-diff-port-653783"  ...
	I1202 12:49:22.593568   60358 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1202 12:49:22.593591   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:49:22.593755   60358 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1202 12:49:22.593774   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:49:22.596442   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:49:22.596863   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:48:33 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:49:22.596887   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:49:22.597044   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:49:22.597209   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:49:22.597352   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:49:22.597502   60358 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:49:22.679069   60358 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1202 12:49:22.733516   60358 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1202 12:49:22.792683   60358 main.go:141] libmachine: Stopping "default-k8s-diff-port-653783"...
	I1202 12:49:22.792710   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:49:22.794313   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Stop
	I1202 12:49:22.797948   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 0/120
	I1202 12:49:23.798983   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 1/120
	I1202 12:49:24.800268   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 2/120
	I1202 12:49:25.801800   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 3/120
	I1202 12:49:26.803734   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 4/120
	I1202 12:49:27.805538   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 5/120
	I1202 12:49:28.807322   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 6/120
	I1202 12:49:29.808665   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 7/120
	I1202 12:49:30.810888   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 8/120
	I1202 12:49:31.812492   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 9/120
	I1202 12:49:32.815030   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 10/120
	I1202 12:49:33.816449   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 11/120
	I1202 12:49:34.818771   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 12/120
	I1202 12:49:35.820543   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 13/120
	I1202 12:49:36.822769   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 14/120
	I1202 12:49:37.824615   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 15/120
	I1202 12:49:38.825773   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 16/120
	I1202 12:49:39.827294   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 17/120
	I1202 12:49:40.828590   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 18/120
	I1202 12:49:41.830439   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 19/120
	I1202 12:49:42.832364   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 20/120
	I1202 12:49:43.833268   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 21/120
	I1202 12:49:44.834479   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 22/120
	I1202 12:49:45.836077   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 23/120
	I1202 12:49:46.837548   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 24/120
	I1202 12:49:47.839365   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 25/120
	I1202 12:49:48.840911   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 26/120
	I1202 12:49:49.842831   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 27/120
	I1202 12:49:50.844363   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 28/120
	I1202 12:49:51.845685   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 29/120
	I1202 12:49:52.847374   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 30/120
	I1202 12:49:53.849438   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 31/120
	I1202 12:49:54.850589   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 32/120
	I1202 12:49:55.851838   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 33/120
	I1202 12:49:56.853041   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 34/120
	I1202 12:49:57.854860   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 35/120
	I1202 12:49:58.856168   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 36/120
	I1202 12:49:59.857649   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 37/120
	I1202 12:50:00.858982   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 38/120
	I1202 12:50:01.861151   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 39/120
	I1202 12:50:02.863156   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 40/120
	I1202 12:50:03.864442   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 41/120
	I1202 12:50:04.865733   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 42/120
	I1202 12:50:05.867055   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 43/120
	I1202 12:50:06.868698   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 44/120
	I1202 12:50:07.870706   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 45/120
	I1202 12:50:08.872311   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 46/120
	I1202 12:50:09.873992   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 47/120
	I1202 12:50:10.875918   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 48/120
	I1202 12:50:11.877388   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 49/120
	I1202 12:50:12.879383   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 50/120
	I1202 12:50:13.880766   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 51/120
	I1202 12:50:14.882598   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 52/120
	I1202 12:50:15.884077   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 53/120
	I1202 12:50:16.885153   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 54/120
	I1202 12:50:17.887207   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 55/120
	I1202 12:50:18.889598   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 56/120
	I1202 12:50:19.890844   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 57/120
	I1202 12:50:20.891931   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 58/120
	I1202 12:50:21.893272   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 59/120
	I1202 12:50:22.895070   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 60/120
	I1202 12:50:23.896479   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 61/120
	I1202 12:50:24.898504   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 62/120
	I1202 12:50:25.899658   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 63/120
	I1202 12:50:26.900795   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 64/120
	I1202 12:50:27.902347   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 65/120
	I1202 12:50:28.903560   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 66/120
	I1202 12:50:29.904695   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 67/120
	I1202 12:50:30.906559   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 68/120
	I1202 12:50:31.907818   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 69/120
	I1202 12:50:32.909819   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 70/120
	I1202 12:50:33.911101   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 71/120
	I1202 12:50:34.912484   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 72/120
	I1202 12:50:35.913869   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 73/120
	I1202 12:50:36.915185   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 74/120
	I1202 12:50:37.916936   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 75/120
	I1202 12:50:38.919394   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 76/120
	I1202 12:50:39.920696   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 77/120
	I1202 12:50:40.921924   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 78/120
	I1202 12:50:41.923274   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 79/120
	I1202 12:50:42.925093   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 80/120
	I1202 12:50:43.926318   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 81/120
	I1202 12:50:44.927613   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 82/120
	I1202 12:50:45.928880   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 83/120
	I1202 12:50:46.930167   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 84/120
	I1202 12:50:47.931572   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 85/120
	I1202 12:50:48.932846   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 86/120
	I1202 12:50:49.934098   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 87/120
	I1202 12:50:50.935239   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 88/120
	I1202 12:50:51.936616   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 89/120
	I1202 12:50:52.938674   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 90/120
	I1202 12:50:53.939752   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 91/120
	I1202 12:50:54.941038   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 92/120
	I1202 12:50:55.942561   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 93/120
	I1202 12:50:56.943893   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 94/120
	I1202 12:50:57.945714   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 95/120
	I1202 12:50:58.947014   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 96/120
	I1202 12:50:59.948687   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 97/120
	I1202 12:51:00.950033   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 98/120
	I1202 12:51:01.951214   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 99/120
	I1202 12:51:02.953058   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 100/120
	I1202 12:51:03.954161   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 101/120
	I1202 12:51:04.955308   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 102/120
	I1202 12:51:05.956421   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 103/120
	I1202 12:51:06.958401   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 104/120
	I1202 12:51:07.960491   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 105/120
	I1202 12:51:08.962656   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 106/120
	I1202 12:51:09.963768   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 107/120
	I1202 12:51:10.965042   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 108/120
	I1202 12:51:11.966782   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 109/120
	I1202 12:51:12.968810   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 110/120
	I1202 12:51:13.970247   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 111/120
	I1202 12:51:14.971715   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 112/120
	I1202 12:51:15.972929   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 113/120
	I1202 12:51:16.974726   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 114/120
	I1202 12:51:17.976784   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 115/120
	I1202 12:51:18.978037   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 116/120
	I1202 12:51:19.979479   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 117/120
	I1202 12:51:20.981186   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 118/120
	I1202 12:51:21.982663   60358 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for machine to stop 119/120
	I1202 12:51:22.983065   60358 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1202 12:51:22.983122   60358 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1202 12:51:22.984821   60358 out.go:201] 
	W1202 12:51:22.986111   60358 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1202 12:51:22.986126   60358 out.go:270] * 
	* 
	W1202 12:51:22.989309   60358 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:51:22.990534   60358 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-653783 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783: exit status 3 (18.564024678s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:51:41.556536   60968 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host
	E1202 12:51:41.556567   60968 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-653783" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.03s)
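The Stop failure is a GUEST_STOP_TIMEOUT: the kvm2 driver polls "Waiting for machine to stop" once per second for all 120 attempts, the VM never leaves the "Running" state, and afterwards the node is unreachable over SSH. A hedged sketch of how this could be inspected manually on the Jenkins host; the libvirt domain name is assumed to match the profile name, as the "mk-default-k8s-diff-port-653783" network name in the log suggests:

    # ask libvirt directly what state the VM is in (domain name is an assumption)
    virsh list --all
    virsh dominfo default-k8s-diff-port-653783

    # collect the log file named in the error box plus the full minikube logs
    cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
    out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-653783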

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783: exit status 3 (3.167676792s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:51:44.724523   61062 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host
	E1202 12:51:44.724548   61062 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152574629s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783: exit status 3 (3.063504061s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:51:53.940603   61126 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host
	E1202 12:51:53.940634   61126 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.154:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-653783" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
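EnableAddonAfterStop fails as a knock-on effect of the stop timeout above: the test expects a post-stop host status of "Stopped" but gets "Error", because every SSH dial to 192.168.39.154:22 returns "no route to host", and 'addons enable dashboard' then aborts with MK_ADDON_ENABLE_PAUSED for the same reason. A quick pre-flight sketch for confirming the host state before retrying (profile name and IP taken from the log; 'nc' is assumed to be available on the host):

    # host and apiserver state as the test harness sees them
    out/minikube-linux-amd64 status -p default-k8s-diff-port-653783 --format='{{.Host}} {{.APIServer}}'

    # basic reachability check against the node's SSH endpoint from the log
    nc -vz -w 3 192.168.39.154 22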

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-953044 -n embed-certs-953044
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-02 13:03:12.816493565 +0000 UTC m=+5571.648278566
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
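The dashboard pod never appears within the 9m0s window, so the wait ends with "context deadline exceeded". Listing the pods behind the same label selector the test polls would show whether they are Pending, crash-looping, or missing entirely; a minimal sketch, assuming the kubeconfig context carries the profile name as in the other tests in this report:

    # pods the test's selector is waiting for
    kubectl --context embed-certs-953044 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context embed-certs-953044 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard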
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-953044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-953044 logs -n 25: (1.379486769s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-953044            | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-983490             | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-983490                  | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658679                  | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-983490 image list                           | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:49 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-666766        | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-953044                 | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC | 02 Dec 24 13:02 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:51:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:51:53.986642   61173 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:51:53.986878   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.986887   61173 out.go:358] Setting ErrFile to fd 2...
	I1202 12:51:53.986891   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.987040   61173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:51:53.987531   61173 out.go:352] Setting JSON to false
	I1202 12:51:53.988496   61173 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5666,"bootTime":1733138248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:51:53.988587   61173 start.go:139] virtualization: kvm guest
	I1202 12:51:53.990552   61173 out.go:177] * [default-k8s-diff-port-653783] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:51:53.991681   61173 notify.go:220] Checking for updates...
	I1202 12:51:53.991692   61173 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:51:53.992827   61173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:51:53.993900   61173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:51:53.995110   61173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:51:53.996273   61173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:51:53.997326   61173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:51:53.998910   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:51:53.999556   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:53.999630   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.014837   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I1202 12:51:54.015203   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.015691   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.015717   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.016024   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.016213   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.016420   61173 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:51:54.016702   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.016740   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.031103   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I1202 12:51:54.031480   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.031846   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.031862   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.032152   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.032313   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.066052   61173 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:51:54.067269   61173 start.go:297] selected driver: kvm2
	I1202 12:51:54.067282   61173 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.067398   61173 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:51:54.068083   61173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.068159   61173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:51:54.082839   61173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:51:54.083361   61173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:51:54.083405   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:51:54.083450   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:51:54.083491   61173 start.go:340] cluster config:
	{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.083581   61173 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.085236   61173 out.go:177] * Starting "default-k8s-diff-port-653783" primary control-plane node in "default-k8s-diff-port-653783" cluster
	I1202 12:51:54.086247   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:51:54.086275   61173 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:51:54.086281   61173 cache.go:56] Caching tarball of preloaded images
	I1202 12:51:54.086363   61173 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:51:54.086377   61173 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:51:54.086471   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:51:54.086683   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:51:54.086721   61173 start.go:364] duration metric: took 21.68µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:51:54.086742   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:51:54.086750   61173 fix.go:54] fixHost starting: 
	I1202 12:51:54.087016   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.087049   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.100439   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1202 12:51:54.100860   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.101284   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.101305   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.101699   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.101899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.102027   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:51:54.103398   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Running err=<nil>
	W1202 12:51:54.103428   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:51:54.104862   61173 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-653783" VM ...
	I1202 12:51:51.250214   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:53.251543   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:55.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.384562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:54.397979   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:54.398032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:54.431942   59162 cri.go:89] found id: ""
	I1202 12:51:54.431965   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.431973   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:54.431979   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:54.432024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:54.466033   59162 cri.go:89] found id: ""
	I1202 12:51:54.466054   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.466062   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:54.466067   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:54.466116   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:54.506462   59162 cri.go:89] found id: ""
	I1202 12:51:54.506486   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.506493   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:54.506499   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:54.506545   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:54.539966   59162 cri.go:89] found id: ""
	I1202 12:51:54.539996   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.540006   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:54.540013   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:54.540068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:54.572987   59162 cri.go:89] found id: ""
	I1202 12:51:54.573027   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.573038   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:54.573046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:54.573107   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:54.609495   59162 cri.go:89] found id: ""
	I1202 12:51:54.609528   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.609539   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:54.609547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:54.609593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:54.643109   59162 cri.go:89] found id: ""
	I1202 12:51:54.643136   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.643148   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:54.643205   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:54.643279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:54.681113   59162 cri.go:89] found id: ""
	I1202 12:51:54.681151   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.681160   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:54.681168   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:54.681180   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.734777   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:54.734806   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:54.748171   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:54.748196   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:54.821609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.821628   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:54.821642   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:54.900306   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:54.900339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.438971   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:57.454128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:57.454187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:57.489852   59162 cri.go:89] found id: ""
	I1202 12:51:57.489877   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.489885   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:57.489890   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:57.489938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:57.523496   59162 cri.go:89] found id: ""
	I1202 12:51:57.523515   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.523522   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:57.523528   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:57.523576   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:57.554394   59162 cri.go:89] found id: ""
	I1202 12:51:57.554417   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.554429   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:57.554436   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:57.554497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:57.586259   59162 cri.go:89] found id: ""
	I1202 12:51:57.586281   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.586291   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:57.586298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:57.586353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:57.618406   59162 cri.go:89] found id: ""
	I1202 12:51:57.618427   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.618435   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:57.618440   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:57.618482   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:57.649491   59162 cri.go:89] found id: ""
	I1202 12:51:57.649517   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.649527   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:57.649532   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:57.649575   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:57.682286   59162 cri.go:89] found id: ""
	I1202 12:51:57.682306   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.682313   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:57.682319   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:57.682364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:57.720929   59162 cri.go:89] found id: ""
	I1202 12:51:57.720956   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.720967   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:57.720977   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:57.720987   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:57.802270   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:57.802302   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.841214   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:57.841246   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:57.893691   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:57.893724   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:57.906616   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:57.906640   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:57.973328   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.153852   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:56.653113   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.105934   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:51:54.105950   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.106120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:51:54.108454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.108866   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:48:33 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:51:54.108899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.109032   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:51:54.109170   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109328   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109487   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:51:54.109662   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:51:54.109863   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:51:54.109875   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:51:57.012461   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:51:57.751276   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.250936   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.473500   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:00.487912   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:00.487973   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:00.526513   59162 cri.go:89] found id: ""
	I1202 12:52:00.526539   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.526548   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:00.526557   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:00.526620   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:00.561483   59162 cri.go:89] found id: ""
	I1202 12:52:00.561511   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.561519   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:00.561526   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:00.561583   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:00.592435   59162 cri.go:89] found id: ""
	I1202 12:52:00.592473   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.592484   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:00.592491   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:00.592551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:00.624686   59162 cri.go:89] found id: ""
	I1202 12:52:00.624710   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.624722   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:00.624727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:00.624771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:00.662610   59162 cri.go:89] found id: ""
	I1202 12:52:00.662639   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.662650   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:00.662657   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:00.662721   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:00.695972   59162 cri.go:89] found id: ""
	I1202 12:52:00.695993   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.696000   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:00.696006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:00.696048   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:00.727200   59162 cri.go:89] found id: ""
	I1202 12:52:00.727230   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.727253   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:00.727261   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:00.727316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:00.761510   59162 cri.go:89] found id: ""
	I1202 12:52:00.761536   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.761545   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:00.761556   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:00.761568   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:00.812287   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:00.812318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:00.825282   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:00.825309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:00.894016   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.894042   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:00.894065   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:00.972001   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:00.972034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:59.152373   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:01.153532   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.653266   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.084529   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:02.751465   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:04.752349   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.512982   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:03.528814   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:03.528884   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:03.564137   59162 cri.go:89] found id: ""
	I1202 12:52:03.564159   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.564166   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:03.564173   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:03.564223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:03.608780   59162 cri.go:89] found id: ""
	I1202 12:52:03.608811   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.608822   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:03.608829   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:03.608891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:03.644906   59162 cri.go:89] found id: ""
	I1202 12:52:03.644943   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.644954   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:03.644978   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:03.645052   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:03.676732   59162 cri.go:89] found id: ""
	I1202 12:52:03.676754   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.676761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:03.676767   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:03.676809   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:03.711338   59162 cri.go:89] found id: ""
	I1202 12:52:03.711362   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.711369   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:03.711375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:03.711424   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:03.743657   59162 cri.go:89] found id: ""
	I1202 12:52:03.743682   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.743689   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:03.743694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:03.743737   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:03.777740   59162 cri.go:89] found id: ""
	I1202 12:52:03.777759   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.777766   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:03.777772   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:03.777818   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:03.811145   59162 cri.go:89] found id: ""
	I1202 12:52:03.811169   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.811179   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:03.811190   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:03.811204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:03.862069   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:03.862093   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:03.875133   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:03.875164   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:03.947077   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:03.947102   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:03.947114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:04.023458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:04.023487   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:06.562323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:06.577498   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:06.577556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:06.613937   59162 cri.go:89] found id: ""
	I1202 12:52:06.613962   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.613970   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:06.613976   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:06.614023   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:06.647630   59162 cri.go:89] found id: ""
	I1202 12:52:06.647655   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.647662   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:06.647667   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:06.647711   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:06.683758   59162 cri.go:89] found id: ""
	I1202 12:52:06.683783   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.683793   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:06.683800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:06.683861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:06.722664   59162 cri.go:89] found id: ""
	I1202 12:52:06.722686   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.722694   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:06.722699   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:06.722747   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:06.756255   59162 cri.go:89] found id: ""
	I1202 12:52:06.756280   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.756290   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:06.756296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:06.756340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:06.792350   59162 cri.go:89] found id: ""
	I1202 12:52:06.792376   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.792387   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:06.792394   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:06.792450   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:06.827259   59162 cri.go:89] found id: ""
	I1202 12:52:06.827289   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.827301   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:06.827308   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:06.827367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:06.858775   59162 cri.go:89] found id: ""
	I1202 12:52:06.858795   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.858802   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:06.858811   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:06.858821   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:06.911764   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:06.911795   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:06.925297   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:06.925326   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:06.993703   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:06.993730   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:06.993744   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:07.073657   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:07.073685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:05.653526   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:08.152177   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:06.164438   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:07.251496   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.752479   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.611640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:09.626141   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:09.626199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:09.661406   59162 cri.go:89] found id: ""
	I1202 12:52:09.661425   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.661432   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:09.661439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:09.661498   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:09.698145   59162 cri.go:89] found id: ""
	I1202 12:52:09.698173   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.698184   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:09.698191   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:09.698252   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:09.732150   59162 cri.go:89] found id: ""
	I1202 12:52:09.732178   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.732189   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:09.732197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:09.732261   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:09.768040   59162 cri.go:89] found id: ""
	I1202 12:52:09.768063   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.768070   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:09.768076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:09.768130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:09.801038   59162 cri.go:89] found id: ""
	I1202 12:52:09.801064   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.801075   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:09.801082   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:09.801130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:09.841058   59162 cri.go:89] found id: ""
	I1202 12:52:09.841082   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.841089   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:09.841095   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:09.841137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:09.885521   59162 cri.go:89] found id: ""
	I1202 12:52:09.885541   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.885548   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:09.885554   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:09.885602   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:09.924759   59162 cri.go:89] found id: ""
	I1202 12:52:09.924779   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.924786   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:09.924793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:09.924804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.968241   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:09.968273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:10.020282   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:10.020315   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:10.036491   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:10.036519   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:10.113297   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.113324   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:10.113339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:12.688410   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:12.705296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:12.705356   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:12.743097   59162 cri.go:89] found id: ""
	I1202 12:52:12.743119   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.743127   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:12.743133   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:12.743187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:12.778272   59162 cri.go:89] found id: ""
	I1202 12:52:12.778292   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.778299   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:12.778304   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:12.778365   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:12.816087   59162 cri.go:89] found id: ""
	I1202 12:52:12.816116   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.816127   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:12.816134   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:12.816187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:12.850192   59162 cri.go:89] found id: ""
	I1202 12:52:12.850214   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.850221   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:12.850227   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:12.850282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:12.883325   59162 cri.go:89] found id: ""
	I1202 12:52:12.883351   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.883360   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:12.883367   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:12.883427   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:12.916121   59162 cri.go:89] found id: ""
	I1202 12:52:12.916157   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.916169   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:12.916176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:12.916251   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:12.946704   59162 cri.go:89] found id: ""
	I1202 12:52:12.946733   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.946746   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:12.946753   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:12.946802   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:12.979010   59162 cri.go:89] found id: ""
	I1202 12:52:12.979041   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.979050   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:12.979062   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:12.979075   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:13.062141   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:13.062171   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:13.111866   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:13.111900   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:13.162470   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:13.162498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:13.178497   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:13.178525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:13.245199   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.152556   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:12.153087   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.236522   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:12.249938   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:14.750814   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.746327   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:15.760092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:15.760160   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:15.797460   59162 cri.go:89] found id: ""
	I1202 12:52:15.797484   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.797495   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:15.797503   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:15.797563   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:15.829969   59162 cri.go:89] found id: ""
	I1202 12:52:15.829998   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.830009   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:15.830017   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:15.830072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:15.862390   59162 cri.go:89] found id: ""
	I1202 12:52:15.862418   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.862428   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:15.862435   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:15.862484   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:15.895223   59162 cri.go:89] found id: ""
	I1202 12:52:15.895244   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.895251   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:15.895257   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:15.895311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:15.933157   59162 cri.go:89] found id: ""
	I1202 12:52:15.933184   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.933192   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:15.933197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:15.933245   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:15.964387   59162 cri.go:89] found id: ""
	I1202 12:52:15.964414   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.964425   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:15.964433   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:15.964487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:15.996803   59162 cri.go:89] found id: ""
	I1202 12:52:15.996825   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.996832   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:15.996837   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:15.996881   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:16.029364   59162 cri.go:89] found id: ""
	I1202 12:52:16.029394   59162 logs.go:282] 0 containers: []
	W1202 12:52:16.029402   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:16.029411   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:16.029422   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:16.098237   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:16.098264   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:16.098278   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:16.172386   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:16.172414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:16.216899   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:16.216923   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:16.281565   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:16.281591   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:14.154258   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:16.652807   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.316450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:18.388460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:16.751794   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:19.250295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:18.796337   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:18.809573   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:18.809637   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:18.847965   59162 cri.go:89] found id: ""
	I1202 12:52:18.847991   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.847999   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:18.848004   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:18.848053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:18.883714   59162 cri.go:89] found id: ""
	I1202 12:52:18.883741   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.883751   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:18.883758   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:18.883817   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:18.918581   59162 cri.go:89] found id: ""
	I1202 12:52:18.918605   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.918612   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:18.918617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:18.918672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:18.954394   59162 cri.go:89] found id: ""
	I1202 12:52:18.954426   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.954437   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:18.954443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:18.954502   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:18.995321   59162 cri.go:89] found id: ""
	I1202 12:52:18.995347   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.995355   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:18.995361   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:18.995423   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:19.034030   59162 cri.go:89] found id: ""
	I1202 12:52:19.034055   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.034066   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:19.034073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:19.034130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:19.073569   59162 cri.go:89] found id: ""
	I1202 12:52:19.073597   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.073609   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:19.073615   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:19.073662   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:19.112049   59162 cri.go:89] found id: ""
	I1202 12:52:19.112078   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.112090   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:19.112100   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:19.112113   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:19.180480   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.180502   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:19.180516   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:19.258236   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:19.258264   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:19.299035   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:19.299053   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:19.352572   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:19.352602   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:21.866524   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:21.879286   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:21.879340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:21.910463   59162 cri.go:89] found id: ""
	I1202 12:52:21.910489   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.910498   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:21.910504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:21.910551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:21.943130   59162 cri.go:89] found id: ""
	I1202 12:52:21.943157   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.943165   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:21.943171   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:21.943216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:21.976969   59162 cri.go:89] found id: ""
	I1202 12:52:21.976990   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.976997   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:21.977002   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:21.977055   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:22.022113   59162 cri.go:89] found id: ""
	I1202 12:52:22.022144   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.022153   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:22.022159   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:22.022218   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:22.057387   59162 cri.go:89] found id: ""
	I1202 12:52:22.057406   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.057413   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:22.057418   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:22.057459   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:22.089832   59162 cri.go:89] found id: ""
	I1202 12:52:22.089866   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.089892   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:22.089900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:22.089960   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:22.121703   59162 cri.go:89] found id: ""
	I1202 12:52:22.121727   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.121735   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:22.121740   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:22.121789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:22.155076   59162 cri.go:89] found id: ""
	I1202 12:52:22.155098   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.155108   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:22.155117   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:22.155137   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:22.234831   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:22.234862   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:22.273912   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:22.273945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:22.327932   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:22.327966   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:22.340890   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:22.340913   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:22.419371   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.153845   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.652993   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:23.653111   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.750980   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.250791   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.919868   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:24.935004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:24.935068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:24.972438   59162 cri.go:89] found id: ""
	I1202 12:52:24.972466   59162 logs.go:282] 0 containers: []
	W1202 12:52:24.972474   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:24.972480   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:24.972525   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:25.009282   59162 cri.go:89] found id: ""
	I1202 12:52:25.009310   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.009320   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:25.009329   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:25.009391   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:25.043227   59162 cri.go:89] found id: ""
	I1202 12:52:25.043254   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.043262   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:25.043267   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:25.043318   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:25.079167   59162 cri.go:89] found id: ""
	I1202 12:52:25.079191   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.079198   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:25.079204   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:25.079263   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:25.110308   59162 cri.go:89] found id: ""
	I1202 12:52:25.110332   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.110340   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:25.110346   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:25.110388   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:25.143804   59162 cri.go:89] found id: ""
	I1202 12:52:25.143830   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.143840   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:25.143846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:25.143903   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:25.178114   59162 cri.go:89] found id: ""
	I1202 12:52:25.178140   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.178147   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:25.178155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:25.178204   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:25.212632   59162 cri.go:89] found id: ""
	I1202 12:52:25.212665   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.212675   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:25.212684   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:25.212696   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:25.267733   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:25.267761   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:25.281025   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:25.281048   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:25.346497   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:25.346520   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:25.346531   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:25.437435   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:25.437469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:27.979493   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:27.993542   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:27.993615   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:28.030681   59162 cri.go:89] found id: ""
	I1202 12:52:28.030705   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.030712   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:28.030718   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:28.030771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:28.063991   59162 cri.go:89] found id: ""
	I1202 12:52:28.064019   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.064027   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:28.064032   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:28.064080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:28.097983   59162 cri.go:89] found id: ""
	I1202 12:52:28.098018   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.098029   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:28.098038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:28.098098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:28.131956   59162 cri.go:89] found id: ""
	I1202 12:52:28.131977   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.131987   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:28.131995   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:28.132071   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:28.170124   59162 cri.go:89] found id: ""
	I1202 12:52:28.170160   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.170171   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:28.170177   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:28.170238   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:28.203127   59162 cri.go:89] found id: ""
	I1202 12:52:28.203149   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.203157   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:28.203163   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:28.203216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:28.240056   59162 cri.go:89] found id: ""
	I1202 12:52:28.240081   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.240088   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:28.240094   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:28.240142   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:28.276673   59162 cri.go:89] found id: ""
	I1202 12:52:28.276699   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.276710   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:28.276720   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:28.276733   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:28.333435   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:28.333470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:28.347465   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:28.347491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:52:26.153244   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.153689   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:27.508437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:26.250897   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.250951   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.252183   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:52:28.432745   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:28.432777   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:28.432792   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:28.515984   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:28.516017   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.057069   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:31.070021   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:31.070084   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:31.106501   59162 cri.go:89] found id: ""
	I1202 12:52:31.106530   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.106540   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:31.106547   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:31.106606   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:31.141190   59162 cri.go:89] found id: ""
	I1202 12:52:31.141219   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.141230   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:31.141238   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:31.141298   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:31.176050   59162 cri.go:89] found id: ""
	I1202 12:52:31.176077   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.176087   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:31.176099   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:31.176169   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:31.211740   59162 cri.go:89] found id: ""
	I1202 12:52:31.211769   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.211780   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:31.211786   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:31.211831   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:31.248949   59162 cri.go:89] found id: ""
	I1202 12:52:31.248974   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.248983   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:31.248990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:31.249044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:31.284687   59162 cri.go:89] found id: ""
	I1202 12:52:31.284709   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.284717   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:31.284723   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:31.284765   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:31.317972   59162 cri.go:89] found id: ""
	I1202 12:52:31.317997   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.318004   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:31.318010   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:31.318065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:31.354866   59162 cri.go:89] found id: ""
	I1202 12:52:31.354893   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.354904   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:31.354914   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:31.354927   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:31.425168   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:31.425191   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:31.425202   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:31.508169   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:31.508204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.547193   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:31.547220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:31.601864   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:31.601892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:30.653415   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:33.153132   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.580471   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:32.752026   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:35.251960   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:34.115652   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:34.131644   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:34.131695   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:34.174473   59162 cri.go:89] found id: ""
	I1202 12:52:34.174500   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.174510   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:34.174518   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:34.174571   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:34.226162   59162 cri.go:89] found id: ""
	I1202 12:52:34.226190   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.226201   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:34.226208   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:34.226271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:34.269202   59162 cri.go:89] found id: ""
	I1202 12:52:34.269230   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.269240   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:34.269248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:34.269327   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:34.304571   59162 cri.go:89] found id: ""
	I1202 12:52:34.304604   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.304615   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:34.304621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:34.304670   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:34.339285   59162 cri.go:89] found id: ""
	I1202 12:52:34.339316   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.339327   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:34.339334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:34.339401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:34.374919   59162 cri.go:89] found id: ""
	I1202 12:52:34.374952   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.374964   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:34.374973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:34.375035   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:34.409292   59162 cri.go:89] found id: ""
	I1202 12:52:34.409319   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.409330   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:34.409337   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:34.409404   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:34.442536   59162 cri.go:89] found id: ""
	I1202 12:52:34.442561   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.442568   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:34.442576   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:34.442587   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:34.494551   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:34.494582   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.508684   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:34.508713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:34.572790   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:34.572816   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:34.572835   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:34.649327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:34.649358   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:37.190648   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:37.203913   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:37.203966   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:37.243165   59162 cri.go:89] found id: ""
	I1202 12:52:37.243186   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.243194   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:37.243199   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:37.243246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:37.279317   59162 cri.go:89] found id: ""
	I1202 12:52:37.279343   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.279351   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:37.279356   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:37.279411   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:37.312655   59162 cri.go:89] found id: ""
	I1202 12:52:37.312684   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.312693   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:37.312702   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:37.312748   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:37.346291   59162 cri.go:89] found id: ""
	I1202 12:52:37.346319   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.346328   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:37.346334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:37.346382   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:37.381534   59162 cri.go:89] found id: ""
	I1202 12:52:37.381555   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.381563   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:37.381569   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:37.381621   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:37.416990   59162 cri.go:89] found id: ""
	I1202 12:52:37.417013   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.417020   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:37.417026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:37.417083   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:37.451149   59162 cri.go:89] found id: ""
	I1202 12:52:37.451174   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.451182   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:37.451187   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:37.451233   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:37.485902   59162 cri.go:89] found id: ""
	I1202 12:52:37.485929   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.485940   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:37.485950   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:37.485970   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:37.541615   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:37.541645   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:37.554846   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:37.554866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:37.622432   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:37.622457   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:37.622471   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:37.708793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:37.708832   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:35.154170   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:37.653220   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:36.660437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:37.751726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.252016   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.246822   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:40.260893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:40.260959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:40.294743   59162 cri.go:89] found id: ""
	I1202 12:52:40.294773   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.294782   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:40.294789   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:40.294845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:40.338523   59162 cri.go:89] found id: ""
	I1202 12:52:40.338557   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.338570   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:40.338577   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:40.338628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:40.373134   59162 cri.go:89] found id: ""
	I1202 12:52:40.373162   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.373170   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:40.373176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:40.373225   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:40.410197   59162 cri.go:89] found id: ""
	I1202 12:52:40.410233   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.410247   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:40.410256   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:40.410333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:40.442497   59162 cri.go:89] found id: ""
	I1202 12:52:40.442521   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.442530   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:40.442536   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:40.442597   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:40.477835   59162 cri.go:89] found id: ""
	I1202 12:52:40.477863   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.477872   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:40.477879   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:40.477936   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:40.511523   59162 cri.go:89] found id: ""
	I1202 12:52:40.511547   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.511559   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:40.511567   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:40.511628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:40.545902   59162 cri.go:89] found id: ""
	I1202 12:52:40.545928   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.545942   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:40.545962   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:40.545976   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:40.595638   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:40.595669   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:40.609023   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:40.609043   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:40.680826   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:40.680848   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:40.680866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:40.756551   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:40.756579   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:43.295761   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:43.308764   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:43.308836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:43.343229   59162 cri.go:89] found id: ""
	I1202 12:52:43.343258   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.343268   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:43.343276   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:43.343335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:39.653604   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:42.152871   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:39.732455   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:42.750873   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.250740   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:43.376841   59162 cri.go:89] found id: ""
	I1202 12:52:43.376861   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.376868   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:43.376874   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:43.376918   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:43.415013   59162 cri.go:89] found id: ""
	I1202 12:52:43.415033   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.415041   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:43.415046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:43.415094   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:43.451563   59162 cri.go:89] found id: ""
	I1202 12:52:43.451590   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.451601   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:43.451608   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:43.451658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:43.492838   59162 cri.go:89] found id: ""
	I1202 12:52:43.492859   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.492867   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:43.492872   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:43.492934   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:43.531872   59162 cri.go:89] found id: ""
	I1202 12:52:43.531898   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.531908   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:43.531914   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:43.531957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:43.566235   59162 cri.go:89] found id: ""
	I1202 12:52:43.566260   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.566270   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:43.566277   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:43.566332   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:43.601502   59162 cri.go:89] found id: ""
	I1202 12:52:43.601531   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.601542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:43.601553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:43.601567   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:43.650984   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:43.651012   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:43.664273   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:43.664296   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:43.735791   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:43.735819   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:43.735833   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:43.817824   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:43.817861   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.356130   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:46.368755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:46.368835   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:46.404552   59162 cri.go:89] found id: ""
	I1202 12:52:46.404574   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.404582   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:46.404588   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:46.404640   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:46.438292   59162 cri.go:89] found id: ""
	I1202 12:52:46.438318   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.438329   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:46.438337   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:46.438397   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:46.471614   59162 cri.go:89] found id: ""
	I1202 12:52:46.471636   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.471643   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:46.471649   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:46.471752   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:46.502171   59162 cri.go:89] found id: ""
	I1202 12:52:46.502193   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.502201   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:46.502207   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:46.502250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:46.533820   59162 cri.go:89] found id: ""
	I1202 12:52:46.533842   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.533851   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:46.533859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:46.533914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:46.566891   59162 cri.go:89] found id: ""
	I1202 12:52:46.566918   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.566928   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:46.566936   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:46.566980   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:46.599112   59162 cri.go:89] found id: ""
	I1202 12:52:46.599143   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.599154   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:46.599161   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:46.599215   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:46.630794   59162 cri.go:89] found id: ""
	I1202 12:52:46.630837   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.630849   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:46.630860   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:46.630876   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:46.644180   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:46.644210   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:46.705881   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:46.705921   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:46.705936   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:46.781327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:46.781359   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.820042   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:46.820072   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:44.654330   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:47.152273   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.816427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:48.884464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:47.751118   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.752726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.368930   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:49.381506   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:49.381556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:49.417928   59162 cri.go:89] found id: ""
	I1202 12:52:49.417955   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.417965   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:49.417977   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:49.418034   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:49.450248   59162 cri.go:89] found id: ""
	I1202 12:52:49.450276   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.450286   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:49.450295   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:49.450366   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:49.484288   59162 cri.go:89] found id: ""
	I1202 12:52:49.484311   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.484318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:49.484323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:49.484372   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:49.518565   59162 cri.go:89] found id: ""
	I1202 12:52:49.518585   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.518595   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:49.518602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:49.518650   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:49.552524   59162 cri.go:89] found id: ""
	I1202 12:52:49.552549   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.552556   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:49.552561   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:49.552609   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:49.586570   59162 cri.go:89] found id: ""
	I1202 12:52:49.586599   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.586610   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:49.586617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:49.586672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:49.622561   59162 cri.go:89] found id: ""
	I1202 12:52:49.622590   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.622601   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:49.622609   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:49.622666   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:49.659092   59162 cri.go:89] found id: ""
	I1202 12:52:49.659117   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.659129   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:49.659152   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:49.659170   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:49.672461   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:49.672491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:49.738609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:49.738637   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:49.738670   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:49.820458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:49.820488   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.860240   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:49.860269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.411571   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:52.425037   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:52.425106   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:52.458215   59162 cri.go:89] found id: ""
	I1202 12:52:52.458244   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.458255   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:52.458262   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:52.458316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:52.491781   59162 cri.go:89] found id: ""
	I1202 12:52:52.491809   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.491820   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:52.491827   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:52.491879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:52.528829   59162 cri.go:89] found id: ""
	I1202 12:52:52.528855   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.528864   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:52.528870   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:52.528914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:52.560930   59162 cri.go:89] found id: ""
	I1202 12:52:52.560957   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.560965   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:52.560971   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:52.561021   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:52.594102   59162 cri.go:89] found id: ""
	I1202 12:52:52.594139   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.594152   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:52.594160   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:52.594222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:52.627428   59162 cri.go:89] found id: ""
	I1202 12:52:52.627452   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.627460   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:52.627465   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:52.627529   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:52.659143   59162 cri.go:89] found id: ""
	I1202 12:52:52.659167   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.659175   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:52.659180   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:52.659230   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:52.691603   59162 cri.go:89] found id: ""
	I1202 12:52:52.691625   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.691632   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:52.691640   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:52.691651   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.741989   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:52.742016   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:52.755769   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:52.755790   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:52.826397   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:52.826418   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:52.826431   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:52.904705   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:52.904734   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.653476   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:52.152372   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:51.755127   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.252182   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:55.449363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:55.462294   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:55.462350   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:55.500829   59162 cri.go:89] found id: ""
	I1202 12:52:55.500856   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.500865   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:55.500871   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:55.500927   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:55.533890   59162 cri.go:89] found id: ""
	I1202 12:52:55.533920   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.533931   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:55.533942   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:55.533998   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:55.566686   59162 cri.go:89] found id: ""
	I1202 12:52:55.566715   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.566725   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:55.566736   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:55.566790   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:55.598330   59162 cri.go:89] found id: ""
	I1202 12:52:55.598357   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.598367   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:55.598374   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:55.598429   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:55.630648   59162 cri.go:89] found id: ""
	I1202 12:52:55.630676   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.630686   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:55.630694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:55.630755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:55.664611   59162 cri.go:89] found id: ""
	I1202 12:52:55.664633   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.664640   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:55.664645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:55.664687   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:55.697762   59162 cri.go:89] found id: ""
	I1202 12:52:55.697789   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.697797   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:55.697803   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:55.697853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:55.735239   59162 cri.go:89] found id: ""
	I1202 12:52:55.735263   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.735271   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:55.735279   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:55.735292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:55.805187   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:55.805217   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:55.805233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:55.888420   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:55.888452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.927535   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:55.927561   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:55.976883   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:55.976909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:54.152753   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:56.154364   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.654202   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.968436   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:58.036631   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:56.750816   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.752427   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.490700   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:58.504983   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:58.505053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:58.541332   59162 cri.go:89] found id: ""
	I1202 12:52:58.541352   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.541359   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:58.541365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:58.541409   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:58.579437   59162 cri.go:89] found id: ""
	I1202 12:52:58.579459   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.579466   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:58.579472   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:58.579521   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:58.617374   59162 cri.go:89] found id: ""
	I1202 12:52:58.617406   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.617417   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:58.617425   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:58.617486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:58.653242   59162 cri.go:89] found id: ""
	I1202 12:52:58.653269   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.653280   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:58.653287   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:58.653345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:58.686171   59162 cri.go:89] found id: ""
	I1202 12:52:58.686201   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.686210   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:58.686215   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:58.686262   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:58.719934   59162 cri.go:89] found id: ""
	I1202 12:52:58.719956   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.719966   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:58.719974   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:58.720030   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:58.759587   59162 cri.go:89] found id: ""
	I1202 12:52:58.759610   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.759619   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:58.759626   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:58.759678   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:58.790885   59162 cri.go:89] found id: ""
	I1202 12:52:58.790908   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.790915   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:58.790922   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:58.790934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:58.840192   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:58.840220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.853639   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:58.853663   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:58.924643   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:58.924669   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:58.924679   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:59.013916   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:59.013945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.552305   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:01.565577   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:01.565642   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:01.598261   59162 cri.go:89] found id: ""
	I1202 12:53:01.598294   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.598304   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:01.598310   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:01.598377   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:01.631527   59162 cri.go:89] found id: ""
	I1202 12:53:01.631556   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.631565   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:01.631570   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:01.631631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:01.670788   59162 cri.go:89] found id: ""
	I1202 12:53:01.670812   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.670820   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:01.670826   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:01.670880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:01.708801   59162 cri.go:89] found id: ""
	I1202 12:53:01.708828   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.708838   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:01.708846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:01.708914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:01.746053   59162 cri.go:89] found id: ""
	I1202 12:53:01.746074   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.746083   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:01.746120   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:01.746184   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:01.780873   59162 cri.go:89] found id: ""
	I1202 12:53:01.780894   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.780901   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:01.780907   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:01.780951   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:01.817234   59162 cri.go:89] found id: ""
	I1202 12:53:01.817259   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.817269   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:01.817276   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:01.817335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:01.850277   59162 cri.go:89] found id: ""
	I1202 12:53:01.850302   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.850317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:01.850327   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:01.850342   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:01.933014   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:01.933055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.971533   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:01.971562   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:02.020280   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:02.020311   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:02.034786   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:02.034814   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:02.104013   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:01.152305   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.153925   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:01.250308   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.250937   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:05.751259   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.604595   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:04.618004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:04.618057   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:04.651388   59162 cri.go:89] found id: ""
	I1202 12:53:04.651414   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.651428   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:04.651436   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:04.651495   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:04.686973   59162 cri.go:89] found id: ""
	I1202 12:53:04.686998   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.687005   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:04.687019   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:04.687063   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:04.720630   59162 cri.go:89] found id: ""
	I1202 12:53:04.720654   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.720661   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:04.720667   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:04.720724   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:04.754657   59162 cri.go:89] found id: ""
	I1202 12:53:04.754682   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.754689   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:04.754694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:04.754746   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:04.787583   59162 cri.go:89] found id: ""
	I1202 12:53:04.787611   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.787621   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:04.787628   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:04.787686   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:04.818962   59162 cri.go:89] found id: ""
	I1202 12:53:04.818988   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.818999   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:04.819006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:04.819059   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:04.852015   59162 cri.go:89] found id: ""
	I1202 12:53:04.852035   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.852042   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:04.852047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:04.852097   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:04.886272   59162 cri.go:89] found id: ""
	I1202 12:53:04.886294   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.886301   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:04.886309   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:04.886320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:04.934682   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:04.934712   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:04.947889   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:04.947911   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:05.018970   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:05.018995   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:05.019010   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:05.098203   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:05.098233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:07.637320   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:07.650643   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:07.650706   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:07.683468   59162 cri.go:89] found id: ""
	I1202 12:53:07.683491   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.683499   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:07.683504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:07.683565   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:07.719765   59162 cri.go:89] found id: ""
	I1202 12:53:07.719792   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.719799   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:07.719805   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:07.719855   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:07.760939   59162 cri.go:89] found id: ""
	I1202 12:53:07.760986   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.760996   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:07.761004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:07.761066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:07.799175   59162 cri.go:89] found id: ""
	I1202 12:53:07.799219   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.799231   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:07.799239   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:07.799300   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:07.831957   59162 cri.go:89] found id: ""
	I1202 12:53:07.831987   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.831999   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:07.832007   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:07.832067   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:07.865982   59162 cri.go:89] found id: ""
	I1202 12:53:07.866008   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.866015   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:07.866022   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:07.866080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:07.903443   59162 cri.go:89] found id: ""
	I1202 12:53:07.903467   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.903477   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:07.903484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:07.903541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:07.939268   59162 cri.go:89] found id: ""
	I1202 12:53:07.939293   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.939300   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:07.939310   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:07.939324   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:07.952959   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:07.952984   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:08.039178   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:08.039207   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:08.039223   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:08.121432   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:08.121469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:08.164739   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:08.164767   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:05.652537   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:07.652894   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.116377   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:07.188477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:08.250489   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.250657   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.718599   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:10.731079   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:10.731154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:10.767605   59162 cri.go:89] found id: ""
	I1202 12:53:10.767626   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.767633   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:10.767639   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:10.767689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:10.800464   59162 cri.go:89] found id: ""
	I1202 12:53:10.800483   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.800491   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:10.800496   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:10.800554   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:10.840808   59162 cri.go:89] found id: ""
	I1202 12:53:10.840836   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.840853   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:10.840859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:10.840922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:10.877653   59162 cri.go:89] found id: ""
	I1202 12:53:10.877681   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.877690   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:10.877698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:10.877755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:10.915849   59162 cri.go:89] found id: ""
	I1202 12:53:10.915873   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.915883   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:10.915891   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:10.915953   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:10.948652   59162 cri.go:89] found id: ""
	I1202 12:53:10.948680   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.948691   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:10.948697   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:10.948755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:10.983126   59162 cri.go:89] found id: ""
	I1202 12:53:10.983154   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.983165   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:10.983172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:10.983232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:11.015350   59162 cri.go:89] found id: ""
	I1202 12:53:11.015378   59162 logs.go:282] 0 containers: []
	W1202 12:53:11.015390   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:11.015400   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:11.015414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:11.028713   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:11.028737   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:11.095904   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:11.095932   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:11.095950   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:11.179078   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:11.179114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:11.216075   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:11.216106   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:09.653482   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:12.152117   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.272450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:12.750358   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:14.751316   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.774975   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:13.787745   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:13.787804   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:13.821793   59162 cri.go:89] found id: ""
	I1202 12:53:13.821824   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.821834   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:13.821840   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:13.821885   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:13.854831   59162 cri.go:89] found id: ""
	I1202 12:53:13.854855   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.854864   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:13.854871   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:13.854925   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:13.885113   59162 cri.go:89] found id: ""
	I1202 12:53:13.885142   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.885149   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:13.885155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:13.885201   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:13.915811   59162 cri.go:89] found id: ""
	I1202 12:53:13.915841   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.915851   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:13.915859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:13.915914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:13.948908   59162 cri.go:89] found id: ""
	I1202 12:53:13.948936   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.948946   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:13.948953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:13.949016   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:13.986502   59162 cri.go:89] found id: ""
	I1202 12:53:13.986531   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.986540   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:13.986548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:13.986607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:14.018182   59162 cri.go:89] found id: ""
	I1202 12:53:14.018210   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.018221   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:14.018229   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:14.018287   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:14.054185   59162 cri.go:89] found id: ""
	I1202 12:53:14.054221   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.054233   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:14.054244   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:14.054272   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:14.131353   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.131381   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:14.131402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:14.212787   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:14.212822   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:14.254043   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:14.254073   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:14.309591   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:14.309620   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:16.824827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:16.838150   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:16.838210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:16.871550   59162 cri.go:89] found id: ""
	I1202 12:53:16.871570   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.871577   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:16.871582   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:16.871625   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:16.908736   59162 cri.go:89] found id: ""
	I1202 12:53:16.908766   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.908775   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:16.908781   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:16.908844   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:16.941404   59162 cri.go:89] found id: ""
	I1202 12:53:16.941427   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.941437   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:16.941444   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:16.941500   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:16.971984   59162 cri.go:89] found id: ""
	I1202 12:53:16.972011   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.972023   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:16.972030   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:16.972079   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:17.004573   59162 cri.go:89] found id: ""
	I1202 12:53:17.004596   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.004607   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:17.004614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:17.004661   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:17.037171   59162 cri.go:89] found id: ""
	I1202 12:53:17.037199   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.037210   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:17.037218   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:17.037271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:17.070862   59162 cri.go:89] found id: ""
	I1202 12:53:17.070888   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.070899   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:17.070906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:17.070959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:17.102642   59162 cri.go:89] found id: ""
	I1202 12:53:17.102668   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.102678   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:17.102688   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:17.102701   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:17.182590   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:17.182623   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:17.224313   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:17.224346   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:17.272831   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:17.272855   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:17.286217   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:17.286240   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:17.357274   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.153570   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.651955   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:18.654103   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.340429   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:17.252036   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.751295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.858294   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:19.871731   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:19.871787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:19.906270   59162 cri.go:89] found id: ""
	I1202 12:53:19.906290   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.906297   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:19.906303   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:19.906345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:19.937769   59162 cri.go:89] found id: ""
	I1202 12:53:19.937790   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.937797   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:19.937802   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:19.937845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:19.971667   59162 cri.go:89] found id: ""
	I1202 12:53:19.971689   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.971706   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:19.971714   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:19.971787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:20.005434   59162 cri.go:89] found id: ""
	I1202 12:53:20.005455   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.005461   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:20.005467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:20.005512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:20.041817   59162 cri.go:89] found id: ""
	I1202 12:53:20.041839   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.041848   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:20.041856   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:20.041906   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:20.073923   59162 cri.go:89] found id: ""
	I1202 12:53:20.073946   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.073958   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:20.073966   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:20.074026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:20.107360   59162 cri.go:89] found id: ""
	I1202 12:53:20.107398   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.107409   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:20.107416   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:20.107479   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:20.153919   59162 cri.go:89] found id: ""
	I1202 12:53:20.153942   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.153952   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:20.153963   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:20.153977   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:20.211581   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:20.211610   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:20.227589   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:20.227615   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:20.305225   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:20.305250   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:20.305265   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:20.382674   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:20.382713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:22.924662   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:22.940038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:22.940101   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:22.984768   59162 cri.go:89] found id: ""
	I1202 12:53:22.984795   59162 logs.go:282] 0 containers: []
	W1202 12:53:22.984806   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:22.984815   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:22.984876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:23.024159   59162 cri.go:89] found id: ""
	I1202 12:53:23.024180   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.024188   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:23.024194   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:23.024254   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:23.059929   59162 cri.go:89] found id: ""
	I1202 12:53:23.059948   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.059956   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:23.059961   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:23.060003   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:23.093606   59162 cri.go:89] found id: ""
	I1202 12:53:23.093627   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.093633   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:23.093639   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:23.093689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:23.127868   59162 cri.go:89] found id: ""
	I1202 12:53:23.127893   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.127904   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:23.127910   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:23.127965   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:23.164988   59162 cri.go:89] found id: ""
	I1202 12:53:23.165006   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.165013   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:23.165018   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:23.165058   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:23.196389   59162 cri.go:89] found id: ""
	I1202 12:53:23.196412   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.196423   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:23.196430   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:23.196481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:23.229337   59162 cri.go:89] found id: ""
	I1202 12:53:23.229358   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.229366   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:23.229376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:23.229404   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:23.284041   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:23.284066   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:23.297861   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:23.297884   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:53:21.152126   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:23.154090   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:22.420399   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:22.250790   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:24.252122   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:53:23.364113   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:23.364131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:23.364142   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:23.446244   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:23.446273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:25.986668   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:25.998953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:25.999013   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:26.034844   59162 cri.go:89] found id: ""
	I1202 12:53:26.034868   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.034876   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:26.034883   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:26.034938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:26.067050   59162 cri.go:89] found id: ""
	I1202 12:53:26.067076   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.067083   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:26.067089   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:26.067152   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:26.098705   59162 cri.go:89] found id: ""
	I1202 12:53:26.098735   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.098746   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:26.098754   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:26.098812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:26.131283   59162 cri.go:89] found id: ""
	I1202 12:53:26.131312   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.131321   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:26.131327   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:26.131379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:26.164905   59162 cri.go:89] found id: ""
	I1202 12:53:26.164933   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.164943   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:26.164950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:26.165009   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:26.196691   59162 cri.go:89] found id: ""
	I1202 12:53:26.196715   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.196724   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:26.196732   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:26.196789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:26.227341   59162 cri.go:89] found id: ""
	I1202 12:53:26.227364   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.227374   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:26.227380   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:26.227436   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:26.260569   59162 cri.go:89] found id: ""
	I1202 12:53:26.260589   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.260597   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:26.260606   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:26.260619   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:26.313150   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:26.313175   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:26.327732   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:26.327762   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:26.392748   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:26.392768   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:26.392778   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:26.474456   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:26.474484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:24.146771   58902 pod_ready.go:82] duration metric: took 4m0.000100995s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" ...
	E1202 12:53:24.146796   58902 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 12:53:24.146811   58902 pod_ready.go:39] duration metric: took 4m6.027386938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:53:24.146852   58902 kubeadm.go:597] duration metric: took 4m15.570212206s to restartPrimaryControlPlane
	W1202 12:53:24.146901   58902 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:24.146926   58902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
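For context, the 4m0s failure just above is minikube's extra wait for system pods: process 58902 gives up on the metrics-server pod ever becoming Ready and falls back to resetting the control plane. A rough standalone equivalent of the check that keeps timing out, as a sketch only (the context name embed-certs-953044 appears later in this log; the kube-system label k8s-app=metrics-server is assumed from the upstream metrics-server manifests and is not shown here):

    kubectl --context embed-certs-953044 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m0s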
	I1202 12:53:25.492478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:26.253906   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:28.752313   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:29.018514   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:29.032328   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:29.032457   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:29.067696   59162 cri.go:89] found id: ""
	I1202 12:53:29.067720   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.067732   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:29.067738   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:29.067794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:29.101076   59162 cri.go:89] found id: ""
	I1202 12:53:29.101096   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.101103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:29.101108   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:29.101150   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:29.136446   59162 cri.go:89] found id: ""
	I1202 12:53:29.136473   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.136483   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:29.136489   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:29.136552   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:29.170820   59162 cri.go:89] found id: ""
	I1202 12:53:29.170849   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.170860   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:29.170868   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:29.170931   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:29.205972   59162 cri.go:89] found id: ""
	I1202 12:53:29.206001   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.206012   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:29.206020   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:29.206086   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:29.242118   59162 cri.go:89] found id: ""
	I1202 12:53:29.242155   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.242165   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:29.242172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:29.242222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:29.281377   59162 cri.go:89] found id: ""
	I1202 12:53:29.281405   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.281417   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:29.281426   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:29.281487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:29.316350   59162 cri.go:89] found id: ""
	I1202 12:53:29.316381   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.316393   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:29.316404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:29.316418   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:29.392609   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:29.392648   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.430777   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:29.430804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:29.484157   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:29.484190   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:29.498434   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:29.498457   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:29.568203   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
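The v1.20.0 run (process 59162) repeats this same diagnostic pass every few seconds: no control-plane containers are ever found and kubectl cannot reach localhost:8443, so only host-level logs can be gathered. A minimal manual sketch of one pass, using only commands that already appear above and assuming SSH access to the minikube VM:

    # No kube-apiserver (or any other control-plane) container is ever listed
    sudo crictl ps -a --quiet --name=kube-apiserver
    # The same API probe that keeps failing with "connection refused" on localhost:8443
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # Host-level logs minikube falls back to on each pass
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400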
	I1202 12:53:32.069043   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:32.081796   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:32.081867   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:32.115767   59162 cri.go:89] found id: ""
	I1202 12:53:32.115789   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.115797   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:32.115802   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:32.115861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:32.145962   59162 cri.go:89] found id: ""
	I1202 12:53:32.145984   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.145992   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:32.145999   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:32.146046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:32.177709   59162 cri.go:89] found id: ""
	I1202 12:53:32.177734   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.177744   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:32.177752   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:32.177796   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:32.211897   59162 cri.go:89] found id: ""
	I1202 12:53:32.211921   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.211930   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:32.211937   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:32.211994   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:32.244401   59162 cri.go:89] found id: ""
	I1202 12:53:32.244425   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.244434   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:32.244442   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:32.244503   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:32.278097   59162 cri.go:89] found id: ""
	I1202 12:53:32.278123   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.278140   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:32.278151   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:32.278210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:32.312740   59162 cri.go:89] found id: ""
	I1202 12:53:32.312774   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.312785   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:32.312793   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:32.312860   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:32.345849   59162 cri.go:89] found id: ""
	I1202 12:53:32.345878   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.345889   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:32.345901   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:32.345917   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:32.395961   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:32.395998   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:32.409582   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:32.409609   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:32.473717   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.473746   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:32.473763   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:32.548547   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:32.548580   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:31.572430   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:31.251492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:33.251616   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.750762   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.088628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:35.102152   59162 kubeadm.go:597] duration metric: took 4m2.014751799s to restartPrimaryControlPlane
	W1202 12:53:35.102217   59162 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:35.102244   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:36.768528   59162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666262663s)
	I1202 12:53:36.768601   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:36.783104   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:36.792966   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:36.802188   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:36.802205   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:36.802234   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:36.811253   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:36.811290   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:36.820464   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:36.829386   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:36.829426   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:36.838814   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.847241   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:36.847272   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.856295   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:36.864892   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:36.864929   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:36.873699   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:37.076297   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
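Before the kubeadm init above, the fallback path checks each managed kubeconfig for the expected control-plane endpoint and deletes any file that does not contain it (here all four files are already missing, so the check simply falls through to removal). A condensed sketch of that check, using the same paths and endpoint as the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done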
	I1202 12:53:34.644489   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:38.250676   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.250779   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.724427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:43.796493   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:42.251341   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:44.751292   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.547760   58902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.400809303s)
	I1202 12:53:50.547840   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:50.564051   58902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:50.573674   58902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:50.582945   58902 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:50.582965   58902 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:50.582998   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:50.591979   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:50.592030   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:50.601043   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:50.609896   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:50.609945   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:50.618918   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.627599   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:50.627634   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.636459   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:50.644836   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:50.644880   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:50.653742   58902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:50.698104   58902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 12:53:50.698187   58902 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:53:50.811202   58902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:53:50.811340   58902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:53:50.811466   58902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 12:53:50.822002   58902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:53:47.252492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:49.750168   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.823836   58902 out.go:235]   - Generating certificates and keys ...
	I1202 12:53:50.823933   58902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:53:50.824031   58902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:53:50.824141   58902 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:53:50.824223   58902 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:53:50.824328   58902 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:53:50.824402   58902 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:53:50.824500   58902 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:53:50.824583   58902 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:53:50.824697   58902 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:53:50.824826   58902 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:53:50.824896   58902 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:53:50.824984   58902 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:53:50.912363   58902 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:53:50.997719   58902 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 12:53:51.181182   58902 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:53:51.424413   58902 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:53:51.526033   58902 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:53:51.526547   58902 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:53:51.528947   58902 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:53:51.530665   58902 out.go:235]   - Booting up control plane ...
	I1202 12:53:51.530761   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:53:51.530862   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:53:51.530946   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:53:51.551867   58902 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:53:51.557869   58902 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:53:51.557960   58902 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:53:51.690048   58902 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 12:53:51.690190   58902 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 12:53:52.190616   58902 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56624ms
	I1202 12:53:52.190735   58902 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 12:53:49.876477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:52.948470   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:51.752318   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:54.250701   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:57.192620   58902 kubeadm.go:310] [api-check] The API server is healthy after 5.001974319s
	I1202 12:53:57.205108   58902 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 12:53:57.217398   58902 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 12:53:57.241642   58902 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 12:53:57.241842   58902 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-953044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 12:53:57.252962   58902 kubeadm.go:310] [bootstrap-token] Using token: kqbw67.r50dkuvxntafmbtm
	I1202 12:53:57.254175   58902 out.go:235]   - Configuring RBAC rules ...
	I1202 12:53:57.254282   58902 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 12:53:57.258707   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 12:53:57.265127   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 12:53:57.268044   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 12:53:57.273630   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 12:53:57.276921   58902 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 12:53:57.598936   58902 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 12:53:58.031759   58902 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 12:53:58.598943   58902 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 12:53:58.599838   58902 kubeadm.go:310] 
	I1202 12:53:58.599900   58902 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 12:53:58.599927   58902 kubeadm.go:310] 
	I1202 12:53:58.600020   58902 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 12:53:58.600031   58902 kubeadm.go:310] 
	I1202 12:53:58.600067   58902 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 12:53:58.600150   58902 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 12:53:58.600249   58902 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 12:53:58.600266   58902 kubeadm.go:310] 
	I1202 12:53:58.600343   58902 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 12:53:58.600353   58902 kubeadm.go:310] 
	I1202 12:53:58.600418   58902 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 12:53:58.600429   58902 kubeadm.go:310] 
	I1202 12:53:58.600500   58902 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 12:53:58.600602   58902 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 12:53:58.600694   58902 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 12:53:58.600704   58902 kubeadm.go:310] 
	I1202 12:53:58.600878   58902 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 12:53:58.600996   58902 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 12:53:58.601008   58902 kubeadm.go:310] 
	I1202 12:53:58.601121   58902 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601248   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 12:53:58.601281   58902 kubeadm.go:310] 	--control-plane 
	I1202 12:53:58.601298   58902 kubeadm.go:310] 
	I1202 12:53:58.601437   58902 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 12:53:58.601451   58902 kubeadm.go:310] 
	I1202 12:53:58.601570   58902 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601726   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 12:53:58.601878   58902 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:53:58.602090   58902 cni.go:84] Creating CNI manager for ""
	I1202 12:53:58.602108   58902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:53:58.603597   58902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:53:58.604832   58902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:53:58.616597   58902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 12:53:58.633585   58902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 12:53:58.633639   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:58.633694   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-953044 minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=embed-certs-953044 minikube.k8s.io/primary=true
	I1202 12:53:58.843567   58902 ops.go:34] apiserver oom_adj: -16
	I1202 12:53:58.843643   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:56.252079   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:58.750596   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:59.344179   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:59.844667   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.343766   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.843808   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.343992   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.843750   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.344088   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.431425   58902 kubeadm.go:1113] duration metric: took 3.797838401s to wait for elevateKubeSystemPrivileges
	I1202 12:54:02.431466   58902 kubeadm.go:394] duration metric: took 4m53.907154853s to StartCluster
	I1202 12:54:02.431488   58902 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.431574   58902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:54:02.433388   58902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.433759   58902 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:54:02.433844   58902 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 12:54:02.433961   58902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-953044"
	I1202 12:54:02.433979   58902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-953044"
	I1202 12:54:02.433978   58902 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:54:02.433983   58902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-953044"
	I1202 12:54:02.434009   58902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-953044"
	I1202 12:54:02.433983   58902 addons.go:69] Setting metrics-server=true in profile "embed-certs-953044"
	I1202 12:54:02.434082   58902 addons.go:234] Setting addon metrics-server=true in "embed-certs-953044"
	W1202 12:54:02.434090   58902 addons.go:243] addon metrics-server should already be in state true
	I1202 12:54:02.434121   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	W1202 12:54:02.433990   58902 addons.go:243] addon storage-provisioner should already be in state true
	I1202 12:54:02.434195   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.434500   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434544   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434550   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434566   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434589   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434606   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.435408   58902 out.go:177] * Verifying Kubernetes components...
	I1202 12:54:02.436893   58902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:54:02.450113   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1202 12:54:02.450620   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.451022   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.451047   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.451376   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.451545   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.454345   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I1202 12:54:02.454346   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1202 12:54:02.454788   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.454832   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.455251   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455268   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455281   58902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-953044"
	W1202 12:54:02.455303   58902 addons.go:243] addon default-storageclass should already be in state true
	I1202 12:54:02.455336   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.455286   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455377   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455570   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455696   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455708   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.455739   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456068   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456085   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456105   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456122   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.470558   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I1202 12:54:02.470761   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I1202 12:54:02.470971   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471035   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43157
	I1202 12:54:02.471142   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471406   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471426   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471494   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471620   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471633   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471955   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472019   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.472035   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.472110   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472127   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472446   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472647   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472685   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.472721   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.474380   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.474597   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.476328   58902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 12:54:02.476338   58902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:54:02.477992   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 12:54:02.478008   58902 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 12:54:02.478022   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.478549   58902 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.478567   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 12:54:02.478584   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.481364   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481698   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.481725   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481956   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.482008   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482150   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.482274   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.482417   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.482503   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.482521   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482785   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.483079   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.483352   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.483478   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.489285   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I1202 12:54:02.489644   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.490064   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.490085   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.490346   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.490510   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.491774   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.491961   58902 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.491974   58902 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 12:54:02.491990   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.494680   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495069   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.495098   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495259   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.495392   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.495582   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.495700   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.626584   58902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:54:02.650914   58902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658909   58902 node_ready.go:49] node "embed-certs-953044" has status "Ready":"True"
	I1202 12:54:02.658931   58902 node_ready.go:38] duration metric: took 7.986729ms for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658939   58902 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:02.663878   58902 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:02.708572   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.711794   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 12:54:02.711813   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 12:54:02.729787   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.760573   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 12:54:02.760595   58902 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 12:54:02.814731   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:02.814756   58902 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 12:54:02.867045   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:03.549497   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.549532   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.549914   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.549970   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.549999   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550010   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.550032   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.550256   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.550360   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550336   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551311   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551333   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551629   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551591   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.551670   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.551686   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551694   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551907   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.552278   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.552295   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.577295   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.577322   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.577618   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.577631   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.577647   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.835721   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.835752   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836073   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836092   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836108   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.836118   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836460   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836478   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836489   58902 addons.go:475] Verifying addon metrics-server=true in "embed-certs-953044"
	I1202 12:54:03.836492   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.838858   58902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1202 12:54:03.840263   58902 addons.go:510] duration metric: took 1.406440873s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1202 12:53:59.032460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:02.100433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:01.251084   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:03.252024   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:05.752273   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:04.669768   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:07.171770   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:08.180411   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:08.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.751482   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:09.670413   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.669602   58902 pod_ready.go:93] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.669624   58902 pod_ready.go:82] duration metric: took 8.00571576s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.669634   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674276   58902 pod_ready.go:93] pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.674293   58902 pod_ready.go:82] duration metric: took 4.652882ms for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674301   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678330   58902 pod_ready.go:93] pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.678346   58902 pod_ready.go:82] duration metric: took 4.037883ms for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678354   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184565   58902 pod_ready.go:93] pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:12.184591   58902 pod_ready.go:82] duration metric: took 1.506229118s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184601   58902 pod_ready.go:39] duration metric: took 9.525652092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:12.184622   58902 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:12.184683   58902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:12.204339   58902 api_server.go:72] duration metric: took 9.770541552s to wait for apiserver process to appear ...
	I1202 12:54:12.204361   58902 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:12.204383   58902 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8443/healthz ...
	I1202 12:54:12.208020   58902 api_server.go:279] https://192.168.72.203:8443/healthz returned 200:
	ok
	I1202 12:54:12.208957   58902 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:12.208975   58902 api_server.go:131] duration metric: took 4.608337ms to wait for apiserver health ...
	I1202 12:54:12.208982   58902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:12.215103   58902 system_pods.go:59] 9 kube-system pods found
	I1202 12:54:12.215123   58902 system_pods.go:61] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.215128   58902 system_pods.go:61] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.215132   58902 system_pods.go:61] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.215135   58902 system_pods.go:61] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.215145   58902 system_pods.go:61] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.215150   58902 system_pods.go:61] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.215157   58902 system_pods.go:61] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.215171   58902 system_pods.go:61] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.215181   58902 system_pods.go:61] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.215190   58902 system_pods.go:74] duration metric: took 6.203134ms to wait for pod list to return data ...
	I1202 12:54:12.215198   58902 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:12.217406   58902 default_sa.go:45] found service account: "default"
	I1202 12:54:12.217421   58902 default_sa.go:55] duration metric: took 2.217536ms for default service account to be created ...
	I1202 12:54:12.217427   58902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:12.221673   58902 system_pods.go:86] 9 kube-system pods found
	I1202 12:54:12.221690   58902 system_pods.go:89] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.221695   58902 system_pods.go:89] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.221701   58902 system_pods.go:89] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.221705   58902 system_pods.go:89] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.221709   58902 system_pods.go:89] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.221712   58902 system_pods.go:89] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.221716   58902 system_pods.go:89] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.221724   58902 system_pods.go:89] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.221729   58902 system_pods.go:89] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.221736   58902 system_pods.go:126] duration metric: took 4.304449ms to wait for k8s-apps to be running ...
	I1202 12:54:12.221745   58902 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:12.221780   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:12.238687   58902 system_svc.go:56] duration metric: took 16.934566ms WaitForService to wait for kubelet
	I1202 12:54:12.238707   58902 kubeadm.go:582] duration metric: took 9.804914519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:12.238722   58902 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:12.268746   58902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:12.268776   58902 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:12.268790   58902 node_conditions.go:105] duration metric: took 30.063656ms to run NodePressure ...
	I1202 12:54:12.268802   58902 start.go:241] waiting for startup goroutines ...
	I1202 12:54:12.268813   58902 start.go:246] waiting for cluster config update ...
	I1202 12:54:12.268828   58902 start.go:255] writing updated cluster config ...
	I1202 12:54:12.269149   58902 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:12.315523   58902 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:12.317559   58902 out.go:177] * Done! kubectl is now configured to use "embed-certs-953044" cluster and "default" namespace by default
	I1202 12:54:11.252465   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:13.251203   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:15.251601   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:17.332421   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:17.751347   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.252108   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.404508   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:21.252458   57877 pod_ready.go:82] duration metric: took 4m0.007570673s for pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace to be "Ready" ...
	E1202 12:54:21.252479   57877 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1202 12:54:21.252487   57877 pod_ready.go:39] duration metric: took 4m2.808635222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:21.252501   57877 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:21.252524   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:21.252565   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:21.311644   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:21.311663   57877 cri.go:89] found id: ""
	I1202 12:54:21.311670   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:21.311712   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.316826   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:21.316881   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:21.366930   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:21.366951   57877 cri.go:89] found id: ""
	I1202 12:54:21.366959   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:21.366999   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.371132   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:21.371194   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:21.405238   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.405261   57877 cri.go:89] found id: ""
	I1202 12:54:21.405270   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:21.405312   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.409631   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:21.409687   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:21.444516   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.444535   57877 cri.go:89] found id: ""
	I1202 12:54:21.444542   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:21.444583   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.448736   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:21.448796   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:21.485458   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:21.485484   57877 cri.go:89] found id: ""
	I1202 12:54:21.485494   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:21.485546   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.489882   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:21.489953   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:21.525951   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.525971   57877 cri.go:89] found id: ""
	I1202 12:54:21.525978   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:21.526028   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.530141   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:21.530186   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:21.564886   57877 cri.go:89] found id: ""
	I1202 12:54:21.564909   57877 logs.go:282] 0 containers: []
	W1202 12:54:21.564920   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:21.564928   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:21.564981   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:21.601560   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.601585   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:21.601593   57877 cri.go:89] found id: ""
	I1202 12:54:21.601603   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:21.601660   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.605710   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.609870   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:21.609892   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.645558   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:21.645581   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.680733   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:21.680764   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.731429   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:21.731452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.764658   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:21.764680   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:22.249475   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:22.249511   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:22.305127   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:22.305162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:22.369496   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:22.369528   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:22.384486   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:22.384510   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:22.425402   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:22.425424   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:22.463801   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:22.463828   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:22.507022   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:22.507048   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:22.638422   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:22.638452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.190880   57877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:25.206797   57877 api_server.go:72] duration metric: took 4m14.027370187s to wait for apiserver process to appear ...
	I1202 12:54:25.206823   57877 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:25.206866   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:25.206924   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:25.241643   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.241669   57877 cri.go:89] found id: ""
	I1202 12:54:25.241680   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:25.241734   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.245997   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:25.246037   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:25.290955   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:25.290973   57877 cri.go:89] found id: ""
	I1202 12:54:25.290980   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:25.291029   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.295284   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:25.295329   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:25.333254   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:25.333275   57877 cri.go:89] found id: ""
	I1202 12:54:25.333284   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:25.333328   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.337649   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:25.337698   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:25.371662   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.371682   57877 cri.go:89] found id: ""
	I1202 12:54:25.371691   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:25.371739   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.376026   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:25.376075   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:25.411223   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:25.411238   57877 cri.go:89] found id: ""
	I1202 12:54:25.411245   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:25.411287   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.415307   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:25.415351   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:25.451008   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:25.451027   57877 cri.go:89] found id: ""
	I1202 12:54:25.451035   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:25.451089   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.455681   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:25.455727   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:25.499293   57877 cri.go:89] found id: ""
	I1202 12:54:25.499315   57877 logs.go:282] 0 containers: []
	W1202 12:54:25.499325   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:25.499332   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:25.499377   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:25.533874   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:25.533896   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:25.533903   57877 cri.go:89] found id: ""
	I1202 12:54:25.533912   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:25.533961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.537993   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.541881   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:25.541899   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:25.645488   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:25.645512   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.683783   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:25.683807   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:26.120334   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:26.120367   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:26.484425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:26.190493   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:26.190521   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:26.235397   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:26.235421   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:26.285411   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:26.285452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:26.331807   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:26.331836   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:26.374437   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:26.374461   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:26.436459   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:26.436487   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:26.472126   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:26.472162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:26.504819   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:26.504840   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:26.518789   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:26.518821   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.069521   57877 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 12:54:29.074072   57877 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 12:54:29.075022   57877 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:29.075041   57877 api_server.go:131] duration metric: took 3.868210222s to wait for apiserver health ...
	I1202 12:54:29.075048   57877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:29.075069   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:29.075112   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:29.110715   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:29.110735   57877 cri.go:89] found id: ""
	I1202 12:54:29.110742   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:29.110790   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.114994   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:29.115040   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:29.150431   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.150459   57877 cri.go:89] found id: ""
	I1202 12:54:29.150468   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:29.150525   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.154909   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:29.154967   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:29.198139   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.198162   57877 cri.go:89] found id: ""
	I1202 12:54:29.198172   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:29.198224   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.202969   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:29.203031   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:29.243771   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.243795   57877 cri.go:89] found id: ""
	I1202 12:54:29.243802   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:29.243843   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.248039   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:29.248106   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:29.286473   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.286492   57877 cri.go:89] found id: ""
	I1202 12:54:29.286498   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:29.286538   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.290543   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:29.290590   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:29.327899   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.327916   57877 cri.go:89] found id: ""
	I1202 12:54:29.327922   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:29.327961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.332516   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:29.332571   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:29.368204   57877 cri.go:89] found id: ""
	I1202 12:54:29.368236   57877 logs.go:282] 0 containers: []
	W1202 12:54:29.368247   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:29.368255   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:29.368301   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:29.407333   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.407358   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.407364   57877 cri.go:89] found id: ""
	I1202 12:54:29.407372   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:29.407425   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.412153   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.416525   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:29.416548   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.457360   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:29.457394   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.495662   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:29.495691   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.549304   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:29.549331   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.585693   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:29.585718   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.621888   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:29.621912   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.670118   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:29.670153   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:29.685833   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:29.685855   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:29.792525   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:29.792555   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.837090   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:29.837138   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.872862   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:29.872893   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:30.228483   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:30.228523   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:30.298252   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:30.298285   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
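For reference, the log-gathering loop above shells out to crictl and journalctl for each component. A minimal sketch of the equivalent manual commands on the node, assuming SSH access; the container ID is the kube-apiserver ID from this run and will differ elsewhere:

	# discover a component's container ID (all states, IDs only)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail the last 400 lines of that container's logs
	sudo crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7
	# unit logs for the runtime and the kubelet
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# recent kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400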
	I1202 12:54:32.851536   57877 system_pods.go:59] 8 kube-system pods found
	I1202 12:54:32.851562   57877 system_pods.go:61] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.851567   57877 system_pods.go:61] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.851571   57877 system_pods.go:61] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.851574   57877 system_pods.go:61] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.851577   57877 system_pods.go:61] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.851580   57877 system_pods.go:61] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.851586   57877 system_pods.go:61] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.851590   57877 system_pods.go:61] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.851597   57877 system_pods.go:74] duration metric: took 3.776542886s to wait for pod list to return data ...
	I1202 12:54:32.851604   57877 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:32.853911   57877 default_sa.go:45] found service account: "default"
	I1202 12:54:32.853928   57877 default_sa.go:55] duration metric: took 2.318516ms for default service account to be created ...
	I1202 12:54:32.853935   57877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:32.858485   57877 system_pods.go:86] 8 kube-system pods found
	I1202 12:54:32.858508   57877 system_pods.go:89] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.858513   57877 system_pods.go:89] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.858519   57877 system_pods.go:89] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.858523   57877 system_pods.go:89] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.858526   57877 system_pods.go:89] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.858530   57877 system_pods.go:89] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.858536   57877 system_pods.go:89] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.858540   57877 system_pods.go:89] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.858547   57877 system_pods.go:126] duration metric: took 4.607096ms to wait for k8s-apps to be running ...
	I1202 12:54:32.858555   57877 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:32.858592   57877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:32.874267   57877 system_svc.go:56] duration metric: took 15.704013ms WaitForService to wait for kubelet
	I1202 12:54:32.874293   57877 kubeadm.go:582] duration metric: took 4m21.694870267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:32.874311   57877 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:32.877737   57877 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:32.877757   57877 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:32.877768   57877 node_conditions.go:105] duration metric: took 3.452076ms to run NodePressure ...
	I1202 12:54:32.877782   57877 start.go:241] waiting for startup goroutines ...
	I1202 12:54:32.877791   57877 start.go:246] waiting for cluster config update ...
	I1202 12:54:32.877807   57877 start.go:255] writing updated cluster config ...
	I1202 12:54:32.878129   57877 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:32.926190   57877 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:32.927894   57877 out.go:177] * Done! kubectl is now configured to use "no-preload-658679" cluster and "default" namespace by default
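Once the profile reports "Done!", the checks the harness just performed can be repeated by hand. A minimal sketch, assuming kubectl is pointed at the "no-preload-658679" context as the log states and that the minikube binary from this job is on PATH:

	# apiserver health, equivalent to the healthz probe above
	kubectl get --raw=/healthz
	# kube-system pods, including the still-pending metrics-server
	kubectl get pods -n kube-system
	# kubelet service state on the node
	minikube -p no-preload-658679 ssh -- sudo systemctl is-active kubelet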
	I1202 12:54:29.556420   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:35.636450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:38.708454   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:44.788462   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:47.860484   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:53.940448   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:57.012536   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:03.092433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:06.164483   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:12.244464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:15.316647   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:21.396479   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:24.468584   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:32.968600   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:55:32.968731   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:55:32.970229   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:32.970291   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:32.970394   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:32.970513   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:32.970629   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:32.970717   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:32.972396   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:32.972491   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:32.972577   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:32.972734   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:32.972823   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:32.972926   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:32.973006   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:32.973108   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:32.973192   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:32.973318   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:32.973429   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:32.973501   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:32.973594   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:32.973658   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:32.973722   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:32.973819   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:32.973903   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:32.974041   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:32.974157   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:32.974206   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:32.974301   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:32.976508   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:32.976620   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:32.976741   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:32.976842   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:32.976957   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:32.977191   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:32.977281   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:55:32.977342   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977505   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977579   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977795   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977906   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978091   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978174   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978394   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978497   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978743   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978756   59162 kubeadm.go:310] 
	I1202 12:55:32.978801   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:55:32.978859   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:55:32.978868   59162 kubeadm.go:310] 
	I1202 12:55:32.978914   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:55:32.978961   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:55:32.979078   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:55:32.979088   59162 kubeadm.go:310] 
	I1202 12:55:32.979230   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:55:32.979279   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:55:32.979337   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:55:32.979346   59162 kubeadm.go:310] 
	I1202 12:55:32.979484   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:55:32.979580   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:55:32.979593   59162 kubeadm.go:310] 
	I1202 12:55:32.979721   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:55:32.979848   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:55:32.979968   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:55:32.980059   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:55:32.980127   59162 kubeadm.go:310] 
	W1202 12:55:32.980202   59162 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
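The kubeadm output above already names the follow-up commands; collected into one runnable sequence on the node (the CRI socket path is the one kubeadm printed, and the final command addresses the warning in stderr):

	systemctl status kubelet
	journalctl -xeu kubelet
	# list kube containers known to CRI-O, then inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# stderr warned that the kubelet service is not enabled
	sudo systemctl enable kubelet.service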
	
	I1202 12:55:32.980267   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:55:33.452325   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:55:33.467527   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:55:33.477494   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:55:33.477522   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:55:33.477575   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:55:33.487333   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:55:33.487395   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:55:33.497063   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:55:33.506552   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:55:33.506605   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:55:33.515968   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.524922   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:55:33.524956   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.534339   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:55:33.543370   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:55:33.543403   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
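The stale-config check above follows one pattern for each of the four kubeconfig files: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it. A minimal sketch of that pattern, with the file list and endpoint taken from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done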
	I1202 12:55:33.552970   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:55:33.624833   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:33.624990   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:33.767688   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:33.767796   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:33.767909   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:33.935314   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:30.548478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.624512   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.937193   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:33.937290   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:33.937402   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:33.937513   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:33.937620   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:33.937722   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:33.937791   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:33.937845   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:33.937896   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:33.937964   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:33.938028   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:33.938061   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:33.938108   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:34.167163   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:35.008947   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:35.304057   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:35.385824   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:35.409687   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:35.413131   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:35.413218   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:35.569508   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:35.571455   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:35.571596   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:35.578476   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:35.579686   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:35.580586   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:35.582869   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:39.700423   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:42.772498   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:48.852452   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:51.924490   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:58.004488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:01.076456   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:07.160425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:10.228467   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:15.585409   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:56:15.585530   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:15.585792   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:16.308453   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:20.586011   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:20.586257   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:19.380488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:25.460451   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:28.532425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:30.586783   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:30.587053   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:31.533399   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:31.533454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533725   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:31.533749   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533914   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:31.535344   61173 machine.go:96] duration metric: took 4m37.429393672s to provisionDockerMachine
	I1202 12:56:31.535386   61173 fix.go:56] duration metric: took 4m37.448634942s for fixHost
	I1202 12:56:31.535394   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 4m37.448659715s
	W1202 12:56:31.535408   61173 start.go:714] error starting host: provision: host is not running
	W1202 12:56:31.535498   61173 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1202 12:56:31.535507   61173 start.go:729] Will try again in 5 seconds ...
	I1202 12:56:36.536323   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:56:36.536434   61173 start.go:364] duration metric: took 71.395µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:56:36.536463   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:56:36.536471   61173 fix.go:54] fixHost starting: 
	I1202 12:56:36.536763   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:56:36.536790   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:56:36.551482   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1202 12:56:36.551962   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:56:36.552383   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:56:36.552405   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:56:36.552689   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:56:36.552849   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:36.552968   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:56:36.554481   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Stopped err=<nil>
	I1202 12:56:36.554501   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	W1202 12:56:36.554652   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:56:36.556508   61173 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653783" ...
	I1202 12:56:36.557534   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Start
	I1202 12:56:36.557690   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring networks are active...
	I1202 12:56:36.558371   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network default is active
	I1202 12:56:36.558713   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network mk-default-k8s-diff-port-653783 is active
	I1202 12:56:36.559023   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Getting domain xml...
	I1202 12:56:36.559739   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Creating domain...
	I1202 12:56:37.799440   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting to get IP...
	I1202 12:56:37.800397   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800918   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.800836   62278 retry.go:31] will retry after 192.811495ms: waiting for machine to come up
	I1202 12:56:37.995285   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995743   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995771   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.995697   62278 retry.go:31] will retry after 367.440749ms: waiting for machine to come up
	I1202 12:56:38.365229   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365781   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.365731   62278 retry.go:31] will retry after 350.196014ms: waiting for machine to come up
	I1202 12:56:38.717121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717650   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717681   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.717590   62278 retry.go:31] will retry after 557.454725ms: waiting for machine to come up
	I1202 12:56:39.276110   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276631   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:39.276536   62278 retry.go:31] will retry after 735.275509ms: waiting for machine to come up
	I1202 12:56:40.013307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013888   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.013833   62278 retry.go:31] will retry after 613.45623ms: waiting for machine to come up
	I1202 12:56:40.629220   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629731   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629776   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.629678   62278 retry.go:31] will retry after 748.849722ms: waiting for machine to come up
	I1202 12:56:41.380615   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381052   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381075   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:41.381023   62278 retry.go:31] will retry after 1.342160202s: waiting for machine to come up
	I1202 12:56:42.724822   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725315   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725355   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:42.725251   62278 retry.go:31] will retry after 1.693072543s: waiting for machine to come up
	I1202 12:56:44.420249   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420700   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420721   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:44.420658   62278 retry.go:31] will retry after 2.210991529s: waiting for machine to come up
	I1202 12:56:46.633486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633847   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:46.633807   62278 retry.go:31] will retry after 2.622646998s: waiting for machine to come up
	I1202 12:56:50.587516   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:50.587731   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:49.257705   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258232   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:49.258186   62278 retry.go:31] will retry after 2.375973874s: waiting for machine to come up
	I1202 12:56:51.636055   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636422   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636450   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:51.636379   62278 retry.go:31] will retry after 3.118442508s: waiting for machine to come up
	I1202 12:56:54.757260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757665   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Found IP for machine: 192.168.39.154
	I1202 12:56:54.757689   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has current primary IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757697   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserving static IP address...
	I1202 12:56:54.758088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.758108   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserved static IP address: 192.168.39.154
	I1202 12:56:54.758120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | skip adding static IP to network mk-default-k8s-diff-port-653783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"}
	I1202 12:56:54.758134   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Getting to WaitForSSH function...
	I1202 12:56:54.758142   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for SSH to be available...
	I1202 12:56:54.760333   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760643   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.760672   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760789   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH client type: external
	I1202 12:56:54.760812   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa (-rw-------)
	I1202 12:56:54.760855   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:56:54.760880   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | About to run SSH command:
	I1202 12:56:54.760892   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | exit 0
	I1202 12:56:54.884099   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | SSH cmd err, output: <nil>: 
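While the kvm2 driver waits for the restarted VM above, the same state can be read from libvirt directly. A rough sketch, assuming virsh access on the Jenkins host; the domain, network, key path, and address are the ones in the log, and this is only an illustration, not part of the harness:

	# domain state and the DHCP lease on the profile's network
	virsh list --all | grep default-k8s-diff-port-653783
	virsh net-dhcp-leases mk-default-k8s-diff-port-653783
	# confirm SSH the same way the driver does, with the profile's key
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa \
	  docker@192.168.39.154 'exit 0'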
	I1202 12:56:54.884435   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetConfigRaw
	I1202 12:56:54.885058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:54.887519   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.887823   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.887854   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.888041   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:56:54.888333   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:56:54.888352   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:54.888564   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:54.890754   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891062   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.891090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891254   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:54.891423   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891560   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891709   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:54.891851   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:54.892053   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:54.892070   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:56:54.996722   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:56:54.996751   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.996974   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:54.997004   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.997202   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.000026   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000425   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.000453   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000624   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.000810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.000978   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.001122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.001308   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.001540   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.001562   61173 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653783 && echo "default-k8s-diff-port-653783" | sudo tee /etc/hostname
	I1202 12:56:55.122933   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653783
	
	I1202 12:56:55.122965   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.125788   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126182   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.126219   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126406   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.126555   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126718   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126834   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.126973   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.127180   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.127206   61173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:56:55.242263   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:55.242291   61173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:56:55.242331   61173 buildroot.go:174] setting up certificates
	I1202 12:56:55.242340   61173 provision.go:84] configureAuth start
	I1202 12:56:55.242350   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:55.242604   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:55.245340   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245685   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.245719   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245882   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.248090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248481   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.248512   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248659   61173 provision.go:143] copyHostCerts
	I1202 12:56:55.248718   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:56:55.248733   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:56:55.248810   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:56:55.248920   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:56:55.248931   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:56:55.248965   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:56:55.249039   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:56:55.249049   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:56:55.249081   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:56:55.249152   61173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653783 san=[127.0.0.1 192.168.39.154 default-k8s-diff-port-653783 localhost minikube]
	I1202 12:56:55.688887   61173 provision.go:177] copyRemoteCerts
	I1202 12:56:55.688948   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:56:55.688976   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.691486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.691865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.691896   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.692056   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.692239   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.692403   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.692524   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:55.777670   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:56:55.802466   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 12:56:55.826639   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:56:55.850536   61173 provision.go:87] duration metric: took 608.183552ms to configureAuth
	I1202 12:56:55.850560   61173 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:56:55.850731   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:56:55.850813   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.853607   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.853991   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.854024   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.854122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.854294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854436   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854598   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.854734   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.854883   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.854899   61173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:56:56.083902   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:56:56.083931   61173 machine.go:96] duration metric: took 1.195584241s to provisionDockerMachine
	I1202 12:56:56.083944   61173 start.go:293] postStartSetup for "default-k8s-diff-port-653783" (driver="kvm2")
	I1202 12:56:56.083957   61173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:56:56.083974   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.084276   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:56:56.084307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.087400   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087727   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.087750   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.088088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.088272   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.088448   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.170612   61173 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:56:56.175344   61173 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:56:56.175366   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:56:56.175454   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:56:56.175529   61173 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:56:56.175610   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:56:56.185033   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:56:56.209569   61173 start.go:296] duration metric: took 125.611321ms for postStartSetup
	I1202 12:56:56.209605   61173 fix.go:56] duration metric: took 19.673134089s for fixHost
	I1202 12:56:56.209623   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.212600   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.212883   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.212923   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.213137   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.213395   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213575   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213708   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.213854   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:56.214014   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:56.214032   61173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:56:56.320723   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733144216.287359296
	
	I1202 12:56:56.320744   61173 fix.go:216] guest clock: 1733144216.287359296
	I1202 12:56:56.320753   61173 fix.go:229] Guest: 2024-12-02 12:56:56.287359296 +0000 UTC Remote: 2024-12-02 12:56:56.209609687 +0000 UTC m=+302.261021771 (delta=77.749609ms)
	I1202 12:56:56.320776   61173 fix.go:200] guest clock delta is within tolerance: 77.749609ms
	I1202 12:56:56.320781   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 19.784333398s
	I1202 12:56:56.320797   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.321011   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:56.323778   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324117   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.324136   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324289   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324759   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324921   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324984   61173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:56:56.325034   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.325138   61173 ssh_runner.go:195] Run: cat /version.json
	I1202 12:56:56.325164   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.327744   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328000   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328083   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328262   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328373   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.328774   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328769   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.328908   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.329007   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.405370   61173 ssh_runner.go:195] Run: systemctl --version
	I1202 12:56:56.427743   61173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:56:56.574416   61173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:56:56.580858   61173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:56:56.580948   61173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:56:56.597406   61173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:56:56.597427   61173 start.go:495] detecting cgroup driver to use...
	I1202 12:56:56.597472   61173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:56:56.612456   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:56:56.625811   61173 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:56:56.625847   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:56:56.642677   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:56:56.657471   61173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:56:56.776273   61173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:56:56.949746   61173 docker.go:233] disabling docker service ...
	I1202 12:56:56.949807   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:56:56.964275   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:56:56.977461   61173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:56:57.091134   61173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:56:57.209421   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:56:57.223153   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:56:57.241869   61173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:56:57.241933   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.252117   61173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:56:57.252174   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.262799   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.275039   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.285987   61173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:56:57.296968   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.307242   61173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.324555   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.335395   61173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:56:57.344411   61173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:56:57.344450   61173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:56:57.357400   61173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:56:57.366269   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:56:57.486764   61173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:56:57.574406   61173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:56:57.574464   61173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:56:57.579268   61173 start.go:563] Will wait 60s for crictl version
	I1202 12:56:57.579328   61173 ssh_runner.go:195] Run: which crictl
	I1202 12:56:57.583110   61173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:56:57.621921   61173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:56:57.622003   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.650543   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.683842   61173 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:56:57.684861   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:57.687188   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687459   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:57.687505   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687636   61173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:56:57.691723   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:56:57.704869   61173 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:56:57.704999   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:56:57.705054   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:56:57.738780   61173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 12:56:57.738828   61173 ssh_runner.go:195] Run: which lz4
	I1202 12:56:57.743509   61173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:56:57.747763   61173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:56:57.747784   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 12:56:59.105988   61173 crio.go:462] duration metric: took 1.362506994s to copy over tarball
	I1202 12:56:59.106062   61173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:57:01.191007   61173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084920502s)
	I1202 12:57:01.191031   61173 crio.go:469] duration metric: took 2.085014298s to extract the tarball
	I1202 12:57:01.191038   61173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 12:57:01.229238   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:57:01.272133   61173 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:57:01.272156   61173 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:57:01.272164   61173 kubeadm.go:934] updating node { 192.168.39.154 8444 v1.31.2 crio true true} ...
	I1202 12:57:01.272272   61173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:57:01.272330   61173 ssh_runner.go:195] Run: crio config
	I1202 12:57:01.318930   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:01.318957   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:01.318968   61173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:57:01.318994   61173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653783 NodeName:default-k8s-diff-port-653783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:57:01.319125   61173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:57:01.319184   61173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:57:01.330162   61173 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:57:01.330226   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:57:01.340217   61173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 12:57:01.356786   61173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:57:01.373210   61173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1202 12:57:01.390184   61173 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1202 12:57:01.394099   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:57:01.406339   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:57:01.526518   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:57:01.543879   61173 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783 for IP: 192.168.39.154
	I1202 12:57:01.543899   61173 certs.go:194] generating shared ca certs ...
	I1202 12:57:01.543920   61173 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:57:01.544070   61173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:57:01.544134   61173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:57:01.544147   61173 certs.go:256] generating profile certs ...
	I1202 12:57:01.544285   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.key
	I1202 12:57:01.544377   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key.44fa7240
	I1202 12:57:01.544429   61173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key
	I1202 12:57:01.544579   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:57:01.544608   61173 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:57:01.544617   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:57:01.544636   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:57:01.544659   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:57:01.544688   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:57:01.544727   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:57:01.545381   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:57:01.580933   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:57:01.621199   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:57:01.648996   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:57:01.681428   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 12:57:01.710907   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:57:01.741414   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:57:01.766158   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:57:01.789460   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:57:01.812569   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:57:01.836007   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:57:01.858137   61173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:57:01.874315   61173 ssh_runner.go:195] Run: openssl version
	I1202 12:57:01.880190   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:57:01.893051   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898250   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898306   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.904207   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:57:01.915975   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:57:01.927977   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932436   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932478   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.938049   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:57:01.948744   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:57:01.959472   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963806   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963839   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.969412   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:57:01.980743   61173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:57:01.986211   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:57:01.992717   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:57:01.998781   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:57:02.004934   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:57:02.010903   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:57:02.016677   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 12:57:02.022595   61173 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:57:02.022680   61173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:57:02.022711   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.060425   61173 cri.go:89] found id: ""
	I1202 12:57:02.060497   61173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:57:02.070807   61173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:57:02.070827   61173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:57:02.070868   61173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:57:02.081036   61173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:57:02.082088   61173 kubeconfig.go:125] found "default-k8s-diff-port-653783" server: "https://192.168.39.154:8444"
	I1202 12:57:02.084179   61173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:57:02.094381   61173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
	I1202 12:57:02.094429   61173 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:57:02.094441   61173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:57:02.094485   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.129098   61173 cri.go:89] found id: ""
	I1202 12:57:02.129152   61173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:57:02.146731   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:57:02.156860   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:57:02.156881   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 12:57:02.156924   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 12:57:02.166273   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:57:02.166322   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:57:02.175793   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 12:57:02.184665   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:57:02.184707   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:57:02.194243   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.203173   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:57:02.203217   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.212563   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 12:57:02.221640   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:57:02.221682   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:57:02.230764   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:57:02.241691   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:02.353099   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.283720   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.487082   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.564623   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.644136   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:57:03.644219   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.144882   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.644873   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.144778   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.645022   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.662892   61173 api_server.go:72] duration metric: took 2.01875734s to wait for apiserver process to appear ...
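The api_server.go:52 block above waits for the kube-apiserver process to appear by re-running pgrep roughly every 500ms until it exits 0. A minimal sketch of that polling pattern, assuming pgrep is on PATH and run locally rather than over minikube's ssh_runner (the pattern string is copied from the log; this is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` until it succeeds or the
// timeout elapses. Exit status 0 means at least one matching process exists.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
}

func main() {
	// Pattern mirrors the one in the log; adjust for your environment.
	if err := waitForProcess(`kube-apiserver.*minikube.*`, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}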
	I1202 12:57:05.662920   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:57:05.662943   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.328451   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.328479   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.328492   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.368504   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.368547   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.664065   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.681253   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:08.681319   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.163310   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.169674   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:09.169699   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.663220   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.667397   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 12:57:09.675558   61173 api_server.go:141] control plane version: v1.31.2
	I1202 12:57:09.675582   61173 api_server.go:131] duration metric: took 4.012653559s to wait for apiserver health ...
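Between 12:57:05 and 12:57:09 the /healthz probe above moves from 403 (the anonymous user is not yet authorized while RBAC bootstrap roles are still being created), through 500 ([-]poststarthook/rbac/bootstrap-roles failed), to 200. A minimal sketch of that kind of unauthenticated poll, skipping TLS verification the way an anonymous probe would; this is illustrative only, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver's /healthz until it returns 200 or the
// timeout elapses. Early responses may be 403 (anonymous access not yet
// authorized) or 500 (post-start hooks such as rbac/bootstrap-roles failing).
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe: no client cert, no verification of the
			// apiserver's serving cert. Acceptable for a liveness sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Address taken from the log above; substitute your own apiserver endpoint.
	_ = pollHealthz("https://192.168.39.154:8444/healthz", 2*time.Minute)
}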
	I1202 12:57:09.675592   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:09.675601   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:09.677275   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:57:09.678527   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:57:09.690640   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
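The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI config list, which CRI-O picks up to wire pod networking. The exact file contents are not shown in the log; the sketch below writes a representative bridge + portmap conflist, with the subnet and plugin options as illustrative placeholders rather than minikube's actual values, and to a local path so it runs without root:

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI config list. The real 1-k8s.conflist written by
// minikube is not reproduced in the log; the values here are placeholders.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Writing to /etc/cni/net.d normally requires root; keep a local copy here.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}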
	I1202 12:57:09.708185   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:57:09.724719   61173 system_pods.go:59] 8 kube-system pods found
	I1202 12:57:09.724747   61173 system_pods.go:61] "coredns-7c65d6cfc9-7g74d" [a35c0ad2-6c02-4e14-afe5-887b3b5fd70f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 12:57:09.724755   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [25bc45db-481f-4c88-853b-105a32e1e8e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 12:57:09.724763   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [af0f2123-8eac-4f90-bc06-1fc1cb10deda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 12:57:09.724769   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [c18b1705-438b-4954-941e-cfe5a3a0f6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 12:57:09.724777   61173 system_pods.go:61] "kube-proxy-5t9gh" [35d08e89-5ad8-4fcb-9bff-5c12bc1fb497] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 12:57:09.724782   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [0db501e4-36fb-4a67-b11d-d6d9f3fa1383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 12:57:09.724789   61173 system_pods.go:61] "metrics-server-6867b74b74-9v79b" [418c7615-5d41-4a24-b497-674f55573a0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:57:09.724794   61173 system_pods.go:61] "storage-provisioner" [dab6b0c7-8e10-435f-a57c-76044eaa11c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 12:57:09.724799   61173 system_pods.go:74] duration metric: took 16.592713ms to wait for pod list to return data ...
	I1202 12:57:09.724808   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:57:09.731235   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:57:09.731260   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 12:57:09.731274   61173 node_conditions.go:105] duration metric: took 6.4605ms to run NodePressure ...
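node_conditions.go above reads the node's ephemeral-storage capacity (17734596Ki) and CPU capacity (2) before moving on, and the pod_ready skips a few lines later all hinge on the node's Ready condition still being False. A small client-go sketch that reads the same fields, assuming a working kubeconfig at the default location (this is illustrative, not minikube's helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// The same capacity fields node_conditions.go logs above.
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())

		// NodeReady is the condition the pod_ready skips below key on
		// ("node ... has status Ready:False").
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s\n", node.Name, cond.Status)
			}
		}
	}
}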
	I1202 12:57:09.731293   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:10.021346   61173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025152   61173 kubeadm.go:739] kubelet initialised
	I1202 12:57:10.025171   61173 kubeadm.go:740] duration metric: took 3.798597ms waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025178   61173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:57:10.029834   61173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.033699   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033718   61173 pod_ready.go:82] duration metric: took 3.86169ms for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.033726   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033731   61173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.037291   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037308   61173 pod_ready.go:82] duration metric: took 3.569468ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.037317   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037322   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.041016   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041035   61173 pod_ready.go:82] duration metric: took 3.705222ms for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.041046   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041071   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:12.047581   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:14.048663   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:16.547831   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:19.047816   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.047839   61173 pod_ready.go:82] duration metric: took 9.006753973s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.047850   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052277   61173 pod_ready.go:93] pod "kube-proxy-5t9gh" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.052296   61173 pod_ready.go:82] duration metric: took 4.440131ms for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052305   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:21.058989   61173 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:22.558501   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:22.558524   61173 pod_ready.go:82] duration metric: took 3.506212984s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:22.558533   61173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:24.564668   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:27.064209   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
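pod_ready.go treats a pod as ready only when its PodReady condition is True, and metrics-server-6867b74b74-9v79b never reaches that state in the run above. A minimal client-go sketch of the same check, assuming a reachable kubeconfig at the default location; the function name and 2s poll interval are illustrative, not minikube's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, the same
// condition pod_ready.go keys on in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Pod name taken from the log; substitute your own.
	name, ns := "metrics-server-6867b74b74-9v79b", "kube-system"
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait above
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}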
	I1202 12:57:30.586451   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:57:30.586705   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586735   59162 kubeadm.go:310] 
	I1202 12:57:30.586786   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:57:30.586842   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:57:30.586859   59162 kubeadm.go:310] 
	I1202 12:57:30.586924   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:57:30.586990   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:57:30.587140   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:57:30.587152   59162 kubeadm.go:310] 
	I1202 12:57:30.587292   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:57:30.587347   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:57:30.587387   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:57:30.587405   59162 kubeadm.go:310] 
	I1202 12:57:30.587557   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:57:30.587642   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:57:30.587655   59162 kubeadm.go:310] 
	I1202 12:57:30.587751   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:57:30.587841   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:57:30.587923   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:57:30.588029   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:57:30.588043   59162 kubeadm.go:310] 
	I1202 12:57:30.588959   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:57:30.589087   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:57:30.589211   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:57:30.589277   59162 kubeadm.go:394] duration metric: took 7m57.557592718s to StartCluster
	I1202 12:57:30.589312   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:57:30.589358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:57:30.634368   59162 cri.go:89] found id: ""
	I1202 12:57:30.634402   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.634414   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:57:30.634423   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:57:30.634489   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:57:30.669582   59162 cri.go:89] found id: ""
	I1202 12:57:30.669605   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.669617   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:57:30.669625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:57:30.669679   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:57:30.707779   59162 cri.go:89] found id: ""
	I1202 12:57:30.707805   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.707815   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:57:30.707823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:57:30.707878   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:57:30.745724   59162 cri.go:89] found id: ""
	I1202 12:57:30.745751   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.745761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:57:30.745768   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:57:30.745816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:57:30.782946   59162 cri.go:89] found id: ""
	I1202 12:57:30.782969   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.782980   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:57:30.782987   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:57:30.783040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:57:30.821743   59162 cri.go:89] found id: ""
	I1202 12:57:30.821776   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.821787   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:57:30.821795   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:57:30.821843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:57:30.859754   59162 cri.go:89] found id: ""
	I1202 12:57:30.859783   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.859793   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:57:30.859801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:57:30.859876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:57:30.893632   59162 cri.go:89] found id: ""
	I1202 12:57:30.893660   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.893668   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:57:30.893677   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:57:30.893690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:57:30.946387   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:57:30.946413   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:57:30.960540   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:57:30.960565   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:57:31.038246   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:57:31.038267   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:57:31.038279   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:57:31.155549   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:57:31.155584   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 12:57:31.221709   59162 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:57:31.221773   59162 out.go:270] * 
	W1202 12:57:31.221846   59162 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.221868   59162 out.go:270] * 
	W1202 12:57:31.222987   59162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:57:31.226661   59162 out.go:201] 
	W1202 12:57:31.227691   59162 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.227739   59162 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:57:31.227763   59162 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:57:31.229696   59162 out.go:201] 
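The failed start above (pid 59162, Kubernetes v1.20.0) comes down to the kubelet never answering on http://localhost:10248/healthz, which is why every subsequent crictl listing for control-plane containers comes back empty. A minimal sketch of that kubelet-check probe, run on the node itself; this mirrors the check kubeadm performs during wait-control-plane but is not kubeadm's code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// kubeletHealthy performs the same probe as the [kubelet-check] lines above:
// an HTTP GET against the kubelet's local healthz endpoint on port 10248.
func kubeletHealthy() bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here matches the failures in the log above.
		fmt.Println("kubelet probe failed:", err)
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if kubeletHealthy() {
		fmt.Println("kubelet is healthy")
	} else {
		fmt.Println("kubelet is not answering; try 'systemctl status kubelet' and 'journalctl -xeu kubelet'")
	}
}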
	I1202 12:57:29.064892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:31.065451   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:33.564442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:36.064844   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:38.065020   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:40.565467   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:43.065021   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:45.065674   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:47.565692   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:50.064566   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:52.065673   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:54.563919   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:56.565832   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:59.064489   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:01.064627   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:03.066470   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:05.565311   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:07.565342   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:10.065050   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:12.565026   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:15.065113   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:17.065377   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:19.570428   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:22.065941   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:24.564883   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:27.064907   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:29.565025   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:31.565662   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:33.566049   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:36.064675   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:38.064820   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:40.065555   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:42.565304   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:44.566076   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:47.064538   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:49.064571   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:51.064914   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:53.065942   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:55.564490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:57.566484   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:00.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:02.065385   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:04.065541   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:06.065687   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:08.564349   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:11.064985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:13.065285   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:15.565546   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:17.569757   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:20.065490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:22.565206   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:25.065588   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:27.065818   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:29.066671   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:31.565998   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:34.064527   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:36.064698   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:38.065158   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:40.563432   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:42.571603   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:45.065725   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:47.565321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:50.065712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:52.564522   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:55.065989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:57.563712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:59.565908   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:02.065655   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:04.564520   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:07.065360   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:09.566223   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:12.065149   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:14.564989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:17.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:19.066069   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:21.066247   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:23.564474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:26.065294   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:28.563804   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:30.565317   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:32.565978   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:35.064896   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:37.065442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:39.065516   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:41.565297   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:44.064849   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:46.564956   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:49.065151   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:51.065892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:53.570359   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:56.064144   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:58.065042   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:00.065116   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:02.065474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:04.564036   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:06.564531   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:08.565018   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:10.565163   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:13.065421   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:15.065623   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:17.564985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:20.065093   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.065732   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.559325   61173 pod_ready.go:82] duration metric: took 4m0.000776679s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	E1202 13:01:22.559360   61173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 13:01:22.559393   61173 pod_ready.go:39] duration metric: took 4m12.534205059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:01:22.559419   61173 kubeadm.go:597] duration metric: took 4m20.488585813s to restartPrimaryControlPlane
	W1202 13:01:22.559474   61173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 13:01:22.559501   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 13:01:48.872503   61173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312974314s)
	I1202 13:01:48.872571   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:01:48.893337   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:01:48.921145   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:01:48.934577   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:01:48.934594   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 13:01:48.934639   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 13:01:48.956103   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:01:48.956162   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:01:48.967585   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 13:01:48.984040   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:01:48.984084   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:01:48.994049   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.003811   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:01:49.003859   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.013646   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 13:01:49.023003   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:01:49.023051   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 13:01:49.032678   61173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:01:49.196294   61173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 13:01:57.349437   61173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:01:57.349497   61173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:01:57.349571   61173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:01:57.349740   61173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:01:57.349882   61173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:01:57.349976   61173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:01:57.351474   61173 out.go:235]   - Generating certificates and keys ...
	I1202 13:01:57.351576   61173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:01:57.351634   61173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:01:57.351736   61173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 13:01:57.351842   61173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 13:01:57.351952   61173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 13:01:57.352035   61173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 13:01:57.352132   61173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 13:01:57.352202   61173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 13:01:57.352325   61173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 13:01:57.352439   61173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 13:01:57.352515   61173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 13:01:57.352608   61173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:01:57.352689   61173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:01:57.352775   61173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:01:57.352860   61173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:01:57.352962   61173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:01:57.353058   61173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:01:57.353172   61173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:01:57.353295   61173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 13:01:57.354669   61173 out.go:235]   - Booting up control plane ...
	I1202 13:01:57.354756   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 13:01:57.354829   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 13:01:57.354884   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 13:01:57.354984   61173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 13:01:57.355073   61173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 13:01:57.355127   61173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 13:01:57.355280   61173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 13:01:57.355435   61173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 13:01:57.355528   61173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.24354ms
	I1202 13:01:57.355641   61173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 13:01:57.355720   61173 kubeadm.go:310] [api-check] The API server is healthy after 5.002367533s
	I1202 13:01:57.355832   61173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 13:01:57.355945   61173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 13:01:57.356000   61173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 13:01:57.356175   61173 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 13:01:57.356246   61173 kubeadm.go:310] [bootstrap-token] Using token: 0oxhck.9gzdpio1kzs08rgi
	I1202 13:01:57.357582   61173 out.go:235]   - Configuring RBAC rules ...
	I1202 13:01:57.357692   61173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 13:01:57.357798   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 13:01:57.357973   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 13:01:57.358102   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 13:01:57.358246   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 13:01:57.358361   61173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 13:01:57.358460   61173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 13:01:57.358497   61173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 13:01:57.358547   61173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 13:01:57.358557   61173 kubeadm.go:310] 
	I1202 13:01:57.358615   61173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 13:01:57.358625   61173 kubeadm.go:310] 
	I1202 13:01:57.358691   61173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 13:01:57.358698   61173 kubeadm.go:310] 
	I1202 13:01:57.358730   61173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 13:01:57.358800   61173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 13:01:57.358878   61173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 13:01:57.358889   61173 kubeadm.go:310] 
	I1202 13:01:57.358954   61173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 13:01:57.358961   61173 kubeadm.go:310] 
	I1202 13:01:57.358999   61173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 13:01:57.359005   61173 kubeadm.go:310] 
	I1202 13:01:57.359047   61173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 13:01:57.359114   61173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 13:01:57.359179   61173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 13:01:57.359185   61173 kubeadm.go:310] 
	I1202 13:01:57.359271   61173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 13:01:57.359364   61173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 13:01:57.359377   61173 kubeadm.go:310] 
	I1202 13:01:57.359451   61173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359561   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 13:01:57.359581   61173 kubeadm.go:310] 	--control-plane 
	I1202 13:01:57.359587   61173 kubeadm.go:310] 
	I1202 13:01:57.359666   61173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 13:01:57.359678   61173 kubeadm.go:310] 
	I1202 13:01:57.359745   61173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359848   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 13:01:57.359874   61173 cni.go:84] Creating CNI manager for ""
	I1202 13:01:57.359887   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:01:57.361282   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 13:01:57.362319   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 13:01:57.373455   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 13:01:57.393003   61173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 13:01:57.393055   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:57.393136   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653783 minikube.k8s.io/updated_at=2024_12_02T13_01_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=default-k8s-diff-port-653783 minikube.k8s.io/primary=true
	I1202 13:01:57.426483   61173 ops.go:34] apiserver oom_adj: -16
	I1202 13:01:57.584458   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.084831   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.585450   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.084976   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.585068   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.085470   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.584722   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.084770   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.585414   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.725480   61173 kubeadm.go:1113] duration metric: took 4.332474868s to wait for elevateKubeSystemPrivileges
	I1202 13:02:01.725523   61173 kubeadm.go:394] duration metric: took 4m59.70293206s to StartCluster
	I1202 13:02:01.725545   61173 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.725633   61173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:02:01.730008   61173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.730438   61173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:02:01.730586   61173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 13:02:01.730685   61173 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730703   61173 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730707   61173 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730719   61173 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730734   61173 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730736   61173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653783"
	W1202 13:02:01.730746   61173 addons.go:243] addon metrics-server should already be in state true
	I1202 13:02:01.730776   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	W1202 13:02:01.730711   61173 addons.go:243] addon storage-provisioner should already be in state true
	I1202 13:02:01.730865   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.731186   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731204   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731215   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731220   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731235   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731255   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.730707   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:02:01.731895   61173 out.go:177] * Verifying Kubernetes components...
	I1202 13:02:01.733515   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:02:01.748534   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1202 13:02:01.749156   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.749717   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.749743   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.750167   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.750734   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.750771   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.750997   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I1202 13:02:01.751714   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I1202 13:02:01.751911   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752088   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752388   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.752406   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.752785   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753212   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.753240   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.753514   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.753527   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.753807   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753953   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.756554   61173 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653783"
	W1202 13:02:01.756567   61173 addons.go:243] addon default-storageclass should already be in state true
	I1202 13:02:01.756588   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.756803   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.756824   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.769388   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I1202 13:02:01.769867   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.770303   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.770328   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.770810   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.770984   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.771974   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1202 13:02:01.772430   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.773043   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.773068   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.773294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.773441   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.773707   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.775187   61173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 13:02:01.775514   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.776461   61173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:01.776482   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 13:02:01.776499   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.776562   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I1202 13:02:01.776927   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.777077   61173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 13:02:01.777497   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.777509   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.777795   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.778197   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 13:02:01.778215   61173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 13:02:01.778235   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.778284   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.778315   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.779324   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780389   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.780472   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780336   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.780832   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.780996   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.781101   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.781390   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781588   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.781608   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781737   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.781886   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.781973   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.782063   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.793947   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I1202 13:02:01.794298   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.794720   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.794737   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.795031   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.795200   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.796909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.797092   61173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:01.797104   61173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 13:02:01.797121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.799831   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800160   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.800191   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800416   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.800595   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.800702   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.800823   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.936668   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:02:01.954328   61173 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968409   61173 node_ready.go:49] node "default-k8s-diff-port-653783" has status "Ready":"True"
	I1202 13:02:01.968427   61173 node_ready.go:38] duration metric: took 14.066432ms for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968436   61173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:01.981818   61173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:02.071558   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 13:02:02.071590   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 13:02:02.076260   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:02.085318   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:02.098342   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 13:02:02.098363   61173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 13:02:02.156135   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.156165   61173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 13:02:02.175618   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.359810   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.359841   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360111   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360201   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.360179   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360225   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.360246   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360518   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360528   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360532   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.366246   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.366270   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.366633   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.366647   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.366660   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.134955   61173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049592704s)
	I1202 13:02:03.135040   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135059   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135084   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135114   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135342   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135392   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135413   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135432   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135533   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135565   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135584   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.136554   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136558   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136569   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136568   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:03.136572   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136579   61173 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:03.138071   61173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1202 13:02:03.139462   61173 addons.go:510] duration metric: took 1.408893663s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1202 13:02:03.986445   61173 pod_ready.go:93] pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:03.986471   61173 pod_ready.go:82] duration metric: took 2.0046319s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:03.986482   61173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.492973   61173 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:04.492995   61173 pod_ready.go:82] duration metric: took 506.506566ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.493004   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:06.500118   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.502468   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.999764   61173 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:08.999785   61173 pod_ready.go:82] duration metric: took 4.506775084s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:08.999795   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005354   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.005376   61173 pod_ready.go:82] duration metric: took 1.005574607s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005385   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010948   61173 pod_ready.go:93] pod "kube-proxy-d4vw4" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.010964   61173 pod_ready.go:82] duration metric: took 5.574069ms for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010972   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014901   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.014918   61173 pod_ready.go:82] duration metric: took 3.938654ms for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014927   61173 pod_ready.go:39] duration metric: took 8.046482137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:10.014943   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:02:10.014994   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:02:10.032401   61173 api_server.go:72] duration metric: took 8.301924942s to wait for apiserver process to appear ...
	I1202 13:02:10.032418   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:02:10.032436   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 13:02:10.036406   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 13:02:10.037035   61173 api_server.go:141] control plane version: v1.31.2
	I1202 13:02:10.037052   61173 api_server.go:131] duration metric: took 4.627223ms to wait for apiserver health ...
	I1202 13:02:10.037061   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:02:10.042707   61173 system_pods.go:59] 9 kube-system pods found
	I1202 13:02:10.042731   61173 system_pods.go:61] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.042740   61173 system_pods.go:61] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.042746   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.042752   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.042758   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.042762   61173 system_pods.go:61] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.042768   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.042776   61173 system_pods.go:61] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.042782   61173 system_pods.go:61] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.042794   61173 system_pods.go:74] duration metric: took 5.724009ms to wait for pod list to return data ...
	I1202 13:02:10.042800   61173 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:02:10.045407   61173 default_sa.go:45] found service account: "default"
	I1202 13:02:10.045422   61173 default_sa.go:55] duration metric: took 2.615305ms for default service account to be created ...
	I1202 13:02:10.045428   61173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:02:10.050473   61173 system_pods.go:86] 9 kube-system pods found
	I1202 13:02:10.050494   61173 system_pods.go:89] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.050499   61173 system_pods.go:89] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.050505   61173 system_pods.go:89] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.050510   61173 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.050514   61173 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.050518   61173 system_pods.go:89] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.050526   61173 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.050532   61173 system_pods.go:89] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.050540   61173 system_pods.go:89] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.050547   61173 system_pods.go:126] duration metric: took 5.115018ms to wait for k8s-apps to be running ...
	I1202 13:02:10.050552   61173 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:02:10.050588   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:02:10.065454   61173 system_svc.go:56] duration metric: took 14.89671ms WaitForService to wait for kubelet
	I1202 13:02:10.065475   61173 kubeadm.go:582] duration metric: took 8.335001135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:02:10.065490   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:02:10.199102   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:02:10.199123   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 13:02:10.199136   61173 node_conditions.go:105] duration metric: took 133.639645ms to run NodePressure ...
	I1202 13:02:10.199148   61173 start.go:241] waiting for startup goroutines ...
	I1202 13:02:10.199156   61173 start.go:246] waiting for cluster config update ...
	I1202 13:02:10.199167   61173 start.go:255] writing updated cluster config ...
	I1202 13:02:10.199421   61173 ssh_runner.go:195] Run: rm -f paused
	I1202 13:02:10.246194   61173 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:02:10.248146   61173 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653783" cluster and "default" namespace by default
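	Note: the pod_ready.go polling above is minikube's internal readiness loop. Once the "Done!" line is printed, the same state can be inspected by hand against the profile's kubectl context. A minimal sketch, under the assumptions that the context name matches the profile default-k8s-diff-port-653783 and that the metrics-server addon pods carry the usual k8s-app=metrics-server label (neither is stated explicitly in this log):
	
	  # Hypothetical manual equivalent of the pod_ready wait loop recorded above;
	  # the metrics-server label selector is an assumption, not taken from this log.
	  kubectl --context default-k8s-diff-port-653783 -n kube-system \
	    wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=6m
	  # Inspect the system-critical pods the test polls for readiness.
	  kubectl --context default-k8s-diff-port-653783 -n kube-system get pods -o wide
	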
	
	
	==> CRI-O <==
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.585351900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144593585331363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5942b21-5d5d-41c0-8487-d7290d62f066 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.585849098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed301c1b-32fd-4b01-b2ef-b2d68ae0e55b name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.585917506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed301c1b-32fd-4b01-b2ef-b2d68ae0e55b name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.586181399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed301c1b-32fd-4b01-b2ef-b2d68ae0e55b name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.627920858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdb6a3ce-eb06-4dbc-a327-d133cd2c283b name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.628075608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdb6a3ce-eb06-4dbc-a327-d133cd2c283b name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.629546799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68104610-d8a9-41a5-b2c2-9468bb9dbf4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.630224796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144593630180703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68104610-d8a9-41a5-b2c2-9468bb9dbf4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.630927736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=004e99c5-c6ff-47fd-a5ee-56f574091526 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.631086704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=004e99c5-c6ff-47fd-a5ee-56f574091526 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.631449715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=004e99c5-c6ff-47fd-a5ee-56f574091526 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.668439965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8bf8e45-96b7-4be1-b79a-f811161a8179 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.668505364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8bf8e45-96b7-4be1-b79a-f811161a8179 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.669634517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88b4745e-bd0a-4ad6-84f3-fd13a2233b7c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.670246449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144593670126387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88b4745e-bd0a-4ad6-84f3-fd13a2233b7c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.671132390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c80ce2bc-9b1d-4870-8b38-559783ded738 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.671185475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c80ce2bc-9b1d-4870-8b38-559783ded738 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.671383328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c80ce2bc-9b1d-4870-8b38-559783ded738 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.704750531Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3dbf40f3-3a3e-49c9-9fef-a4e46cd3a8f6 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.704859247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3dbf40f3-3a3e-49c9-9fef-a4e46cd3a8f6 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.706418376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61f7e957-7129-4785-8303-2cec8b505722 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.707178653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144593707149314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61f7e957-7129-4785-8303-2cec8b505722 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.707836810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75b8b934-cad5-47a8-b9ec-7c3667c3d1ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.707903511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75b8b934-cad5-47a8-b9ec-7c3667c3d1ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:13 embed-certs-953044 crio[710]: time="2024-12-02 13:03:13.708129844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75b8b934-cad5-47a8-b9ec-7c3667c3d1ff name=/runtime.v1.RuntimeService/ListContainers
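Editor's note: the block above is the tail of the CRI-O debug journal captured from the node; the kubelet polls the runtime every few hundred milliseconds, so the same ListContainers response repeats with only the request id and timestamp changing. A similar capture could be taken directly from the guest with a command along these lines (illustrative only; profile name taken from the log, CRI-O assumed to run as a systemd unit inside the minikube VM):

  # tail the CRI-O service journal inside the embed-certs-953044 guest (illustrative)
  minikube ssh -p embed-certs-953044 -- sudo journalctl -u crio --no-pager -n 100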
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f046b95d54fed       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a862c8481f4a3       coredns-7c65d6cfc9-fwt6z
	d08817fb6c3d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   48fd491b98c05       coredns-7c65d6cfc9-tm4ct
	0b7cadda79ea1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   438fcd4ea8440       storage-provisioner
	0cb21e7d976f8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   2f82a659a255e       kube-proxy-kg4z6
	1cbeab4124925       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   13a76fba8eeb5       etcd-embed-certs-953044
	cc603f56c0eda       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   1c95b5e0c5183       kube-scheduler-embed-certs-953044
	b691ba9ee672e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   c7fcc106b9666       kube-apiserver-embed-certs-953044
	ae1fddd0b9993       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   278a05742631c       kube-controller-manager-embed-certs-953044
	57472bf615dac       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   ae6892854beae       kube-apiserver-embed-certs-953044
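
Editor's note: the container status table above is the runtime's view of the node, including the exited kube-apiserver attempt 1. A comparable listing could be produced on the node with something like the following (illustrative; assumes crictl is available inside the minikube guest):

  # list every CRI-O container on the node, running and exited (illustrative)
  minikube ssh -p embed-certs-953044 -- sudo crictl ps -a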
	
	
	==> coredns [d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
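
Editor's note: both CoreDNS replicas printed only their startup banner, consistent with the DNS pods themselves being healthy at capture time. Their logs could be pulled again with commands of this shape (the kubectl context name is assumed to match the minikube profile, as elsewhere in this report):

  # fetch CoreDNS logs from the kube-system namespace (context name assumed)
  kubectl --context embed-certs-953044 -n kube-system logs coredns-7c65d6cfc9-fwt6z
  kubectl --context embed-certs-953044 -n kube-system logs coredns-7c65d6cfc9-tm4ct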
	
	
	==> describe nodes <==
	Name:               embed-certs-953044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-953044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=embed-certs-953044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 12:53:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-953044
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 13:03:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 12:59:14 +0000   Mon, 02 Dec 2024 12:53:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 12:59:14 +0000   Mon, 02 Dec 2024 12:53:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 12:59:14 +0000   Mon, 02 Dec 2024 12:53:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 12:59:14 +0000   Mon, 02 Dec 2024 12:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.203
	  Hostname:    embed-certs-953044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df6cc10471794eaba57ca35f6b869cf8
	  System UUID:                df6cc104-7179-4eab-a57c-a35f6b869cf8
	  Boot ID:                    19542c91-0491-4a31-9489-18c0c582728d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fwt6z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-tm4ct                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-953044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-953044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-953044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-kg4z6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-embed-certs-953044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-fwhvq               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet          Node embed-certs-953044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet          Node embed-certs-953044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet          Node embed-certs-953044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet          Node embed-certs-953044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet          Node embed-certs-953044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet          Node embed-certs-953044 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node embed-certs-953044 event: Registered Node embed-certs-953044 in Controller
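
Editor's note: the node description shows embed-certs-953044 Ready, with the expected control-plane pods plus metrics-server and storage-provisioner scheduled on it. Equivalent output could be regenerated with commands along these lines (illustrative; context name assumed to match the profile):

  # re-describe the node and list kube-system pods with their status (illustrative)
  kubectl --context embed-certs-953044 describe node embed-certs-953044
  kubectl --context embed-certs-953044 -n kube-system get pods -o wide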
	
	
	==> dmesg <==
	[  +0.039923] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041424] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.780129] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632035] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 2 12:49] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.064141] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080791] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.209584] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.169534] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.327642] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.462982] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.065614] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.961439] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +5.665421] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.267782] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.609016] kauditd_printk_skb: 31 callbacks suppressed
	[Dec 2 12:53] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.242082] systemd-fstab-generator[2649]: Ignoring "noauto" option for root device
	[  +4.498456] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.553578] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[Dec 2 12:54] systemd-fstab-generator[3101]: Ignoring "noauto" option for root device
	[  +0.083707] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.776886] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112] <==
	{"level":"info","ts":"2024-12-02T12:53:53.146296Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-02T12:53:53.147030Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"fd1c782511c6d1a","initial-advertise-peer-urls":["https://192.168.72.203:2380"],"listen-peer-urls":["https://192.168.72.203:2380"],"advertise-client-urls":["https://192.168.72.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-02T12:53:53.147142Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-02T12:53:53.146589Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.203:2380"}
	{"level":"info","ts":"2024-12-02T12:53:53.147272Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.203:2380"}
	{"level":"info","ts":"2024-12-02T12:53:53.976038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-02T12:53:53.976139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-02T12:53:53.976190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgPreVoteResp from fd1c782511c6d1a at term 1"}
	{"level":"info","ts":"2024-12-02T12:53:53.976223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became candidate at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.976247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgVoteResp from fd1c782511c6d1a at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.976274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became leader at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.976299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fd1c782511c6d1a elected leader fd1c782511c6d1a at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.980235Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981220Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fd1c782511c6d1a","local-member-attributes":"{Name:embed-certs-953044 ClientURLs:[https://192.168.72.203:2379]}","request-path":"/0/members/fd1c782511c6d1a/attributes","cluster-id":"e420fb3f9edbaec1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T12:53:53.981680Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e420fb3f9edbaec1","local-member-id":"fd1c782511c6d1a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981805Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:53:53.981782Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:53:53.982821Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:53:53.983585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T12:53:53.983655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T12:53:53.983686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-02T12:53:53.984467Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:53:53.988718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.203:2379"}
	
	
	==> kernel <==
	 13:03:14 up 14 min,  0 users,  load average: 0.26, 0.21, 0.13
	Linux embed-certs-953044 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5] <==
	W1202 12:53:49.289814       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.297259       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.307105       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.314616       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.324305       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.378572       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.405174       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.409788       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.411098       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.437426       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.506217       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.526656       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.569265       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.582033       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.584310       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.650865       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.696493       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.712148       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.716560       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.776710       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.917391       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.972749       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:50.050420       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:50.069759       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:50.137282       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 12:58:56.404365       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 12:58:56.404535       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 12:58:56.405474       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 12:58:56.406634       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 12:59:56.406700       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 12:59:56.406771       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 12:59:56.406789       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 12:59:56.406815       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 12:59:56.408010       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 12:59:56.408110       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:01:56.408653       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:01:56.408897       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 13:01:56.408707       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:01:56.409038       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 13:01:56.410318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:01:56.410531       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824] <==
	E1202 12:58:02.325463       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:58:02.847519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 12:58:32.332800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:58:32.855514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 12:59:02.340488       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:59:02.863540       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 12:59:14.201832       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-953044"
	E1202 12:59:32.349210       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:59:32.871834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 12:59:57.956911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="302.082µs"
	E1202 13:00:02.356266       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:00:02.880284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:00:11.956459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="541.202µs"
	E1202 13:00:32.363666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:00:32.888037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:01:02.370654       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:01:02.896391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:01:32.382774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:01:32.903415       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:02:02.391028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:02:02.912005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:02:32.399087       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:02:32.919918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:03:02.406898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:03:02.928405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 12:54:03.875062       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 12:54:03.894940       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.203"]
	E1202 12:54:03.895285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 12:54:03.979534       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 12:54:03.979562       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 12:54:03.979593       1 server_linux.go:169] "Using iptables Proxier"
	I1202 12:54:03.984208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 12:54:03.984513       1 server.go:483] "Version info" version="v1.31.2"
	I1202 12:54:03.984524       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:54:03.985812       1 config.go:199] "Starting service config controller"
	I1202 12:54:03.985828       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 12:54:03.985848       1 config.go:105] "Starting endpoint slice config controller"
	I1202 12:54:03.985852       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 12:54:03.993138       1 config.go:328] "Starting node config controller"
	I1202 12:54:03.993153       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 12:54:04.090859       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 12:54:04.090938       1 shared_informer.go:320] Caches are synced for service config
	I1202 12:54:04.093185       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5] <==
	W1202 12:53:55.437363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 12:53:55.437408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:55.437521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 12:53:55.437554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:55.437601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 12:53:55.437638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.372083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 12:53:56.372221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.417187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 12:53:56.417363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.448615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1202 12:53:56.448728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.457537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 12:53:56.457640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.473425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 12:53:56.473611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.502355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 12:53:56.502483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.556282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 12:53:56.556363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.657597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 12:53:56.657755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.679630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 12:53:56.679707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1202 12:53:57.028741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 13:02:00 embed-certs-953044 kubelet[2979]: E1202 13:02:00.940189    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:02:08 embed-certs-953044 kubelet[2979]: E1202 13:02:08.049110    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144528048709583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:08 embed-certs-953044 kubelet[2979]: E1202 13:02:08.049175    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144528048709583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:13 embed-certs-953044 kubelet[2979]: E1202 13:02:13.940728    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:02:18 embed-certs-953044 kubelet[2979]: E1202 13:02:18.050876    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144538050193957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:18 embed-certs-953044 kubelet[2979]: E1202 13:02:18.051332    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144538050193957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:26 embed-certs-953044 kubelet[2979]: E1202 13:02:26.939103    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:02:28 embed-certs-953044 kubelet[2979]: E1202 13:02:28.053021    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144548052613436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:28 embed-certs-953044 kubelet[2979]: E1202 13:02:28.053134    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144548052613436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:38 embed-certs-953044 kubelet[2979]: E1202 13:02:38.054652    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144558054341250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:38 embed-certs-953044 kubelet[2979]: E1202 13:02:38.054674    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144558054341250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:38 embed-certs-953044 kubelet[2979]: E1202 13:02:38.938836    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:02:48 embed-certs-953044 kubelet[2979]: E1202 13:02:48.057113    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144568056545757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:48 embed-certs-953044 kubelet[2979]: E1202 13:02:48.059490    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144568056545757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:51 embed-certs-953044 kubelet[2979]: E1202 13:02:51.940322    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:02:57 embed-certs-953044 kubelet[2979]: E1202 13:02:57.970315    2979 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 13:02:57 embed-certs-953044 kubelet[2979]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 13:02:57 embed-certs-953044 kubelet[2979]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 13:02:57 embed-certs-953044 kubelet[2979]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 13:02:57 embed-certs-953044 kubelet[2979]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 13:02:58 embed-certs-953044 kubelet[2979]: E1202 13:02:58.060744    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144578060401245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:58 embed-certs-953044 kubelet[2979]: E1202 13:02:58.060769    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144578060401245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:05 embed-certs-953044 kubelet[2979]: E1202 13:03:05.938589    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:03:08 embed-certs-953044 kubelet[2979]: E1202 13:03:08.062657    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144588061920027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:08 embed-certs-953044 kubelet[2979]: E1202 13:03:08.063005    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144588061920027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b] <==
	I1202 12:54:04.066697       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 12:54:04.081000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 12:54:04.081782       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 12:54:04.099234       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 12:54:04.099279       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1ca092b-593c-414f-b3fd-59e5dbde38d3", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-953044_b6a066d7-5c71-4881-9a27-88e4332c6dee became leader
	I1202 12:54:04.099379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-953044_b6a066d7-5c71-4881-9a27-88e4332c6dee!
	I1202 12:54:04.200504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-953044_b6a066d7-5c71-4881-9a27-88e4332c6dee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-953044 -n embed-certs-953044
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-953044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fwhvq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-953044 describe pod metrics-server-6867b74b74-fwhvq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-953044 describe pod metrics-server-6867b74b74-fwhvq: exit status 1 (60.486228ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fwhvq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-953044 describe pod metrics-server-6867b74b74-fwhvq: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.60s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1202 12:55:01.370235   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:56:24.448513   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658679 -n no-preload-658679
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-02 13:03:33.442877271 +0000 UTC m=+5592.274662276
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-658679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-658679 logs -n 25: (1.295319041s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-953044            | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-983490             | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-983490                  | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658679                  | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-983490 image list                           | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:49 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-666766        | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-953044                 | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC | 02 Dec 24 13:02 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:51:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:51:53.986642   61173 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:51:53.986878   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.986887   61173 out.go:358] Setting ErrFile to fd 2...
	I1202 12:51:53.986891   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.987040   61173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:51:53.987531   61173 out.go:352] Setting JSON to false
	I1202 12:51:53.988496   61173 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5666,"bootTime":1733138248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:51:53.988587   61173 start.go:139] virtualization: kvm guest
	I1202 12:51:53.990552   61173 out.go:177] * [default-k8s-diff-port-653783] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:51:53.991681   61173 notify.go:220] Checking for updates...
	I1202 12:51:53.991692   61173 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:51:53.992827   61173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:51:53.993900   61173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:51:53.995110   61173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:51:53.996273   61173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:51:53.997326   61173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:51:53.998910   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:51:53.999556   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:53.999630   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.014837   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I1202 12:51:54.015203   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.015691   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.015717   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.016024   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.016213   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.016420   61173 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:51:54.016702   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.016740   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.031103   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I1202 12:51:54.031480   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.031846   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.031862   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.032152   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.032313   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.066052   61173 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:51:54.067269   61173 start.go:297] selected driver: kvm2
	I1202 12:51:54.067282   61173 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.067398   61173 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:51:54.068083   61173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.068159   61173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:51:54.082839   61173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:51:54.083361   61173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:51:54.083405   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:51:54.083450   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:51:54.083491   61173 start.go:340] cluster config:
	{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.083581   61173 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.085236   61173 out.go:177] * Starting "default-k8s-diff-port-653783" primary control-plane node in "default-k8s-diff-port-653783" cluster
	I1202 12:51:54.086247   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:51:54.086275   61173 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:51:54.086281   61173 cache.go:56] Caching tarball of preloaded images
	I1202 12:51:54.086363   61173 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:51:54.086377   61173 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:51:54.086471   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:51:54.086683   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:51:54.086721   61173 start.go:364] duration metric: took 21.68µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:51:54.086742   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:51:54.086750   61173 fix.go:54] fixHost starting: 
	I1202 12:51:54.087016   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.087049   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.100439   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1202 12:51:54.100860   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.101284   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.101305   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.101699   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.101899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.102027   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:51:54.103398   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Running err=<nil>
	W1202 12:51:54.103428   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:51:54.104862   61173 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-653783" VM ...
	I1202 12:51:51.250214   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:53.251543   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:55.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.384562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:54.397979   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:54.398032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:54.431942   59162 cri.go:89] found id: ""
	I1202 12:51:54.431965   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.431973   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:54.431979   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:54.432024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:54.466033   59162 cri.go:89] found id: ""
	I1202 12:51:54.466054   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.466062   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:54.466067   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:54.466116   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:54.506462   59162 cri.go:89] found id: ""
	I1202 12:51:54.506486   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.506493   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:54.506499   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:54.506545   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:54.539966   59162 cri.go:89] found id: ""
	I1202 12:51:54.539996   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.540006   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:54.540013   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:54.540068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:54.572987   59162 cri.go:89] found id: ""
	I1202 12:51:54.573027   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.573038   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:54.573046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:54.573107   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:54.609495   59162 cri.go:89] found id: ""
	I1202 12:51:54.609528   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.609539   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:54.609547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:54.609593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:54.643109   59162 cri.go:89] found id: ""
	I1202 12:51:54.643136   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.643148   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:54.643205   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:54.643279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:54.681113   59162 cri.go:89] found id: ""
	I1202 12:51:54.681151   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.681160   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:54.681168   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:54.681180   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.734777   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:54.734806   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:54.748171   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:54.748196   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:54.821609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
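	The "failed describe nodes" block above comes from the bundled kubectl being pointed at an apiserver that is not accepting connections on localhost:8443. A minimal manual reproduction of the same probe, run inside the node VM (the command is taken verbatim from the log; entering the VM via "minikube ssh -p <profile>" is an assumption, and <profile> is a placeholder):
	
	  # inside the node VM; fails with "connection refused" while the apiserver is down
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig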
	I1202 12:51:54.821628   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:54.821642   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:54.900306   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:54.900339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.438971   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:57.454128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:57.454187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:57.489852   59162 cri.go:89] found id: ""
	I1202 12:51:57.489877   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.489885   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:57.489890   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:57.489938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:57.523496   59162 cri.go:89] found id: ""
	I1202 12:51:57.523515   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.523522   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:57.523528   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:57.523576   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:57.554394   59162 cri.go:89] found id: ""
	I1202 12:51:57.554417   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.554429   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:57.554436   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:57.554497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:57.586259   59162 cri.go:89] found id: ""
	I1202 12:51:57.586281   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.586291   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:57.586298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:57.586353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:57.618406   59162 cri.go:89] found id: ""
	I1202 12:51:57.618427   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.618435   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:57.618440   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:57.618482   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:57.649491   59162 cri.go:89] found id: ""
	I1202 12:51:57.649517   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.649527   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:57.649532   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:57.649575   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:57.682286   59162 cri.go:89] found id: ""
	I1202 12:51:57.682306   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.682313   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:57.682319   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:57.682364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:57.720929   59162 cri.go:89] found id: ""
	I1202 12:51:57.720956   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.720967   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:57.720977   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:57.720987   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:57.802270   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:57.802302   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.841214   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:57.841246   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:57.893691   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:57.893724   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:57.906616   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:57.906640   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:57.973328   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
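	Each "listing CRI containers" cycle above is a per-component crictl query that returns no IDs. As an illustrative sketch (not the test harness itself), the same probe can be reproduced on the node with a small loop; the component names and crictl flags are taken from the log:
	
	  # lists any container (running or exited) whose name matches each control-plane component
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$c"
	  done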
	I1202 12:51:54.153852   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:56.653113   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.105934   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:51:54.105950   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.106120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:51:54.108454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.108866   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:48:33 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:51:54.108899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.109032   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:51:54.109170   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109328   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109487   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:51:54.109662   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:51:54.109863   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:51:54.109875   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:51:57.012461   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
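	The repeated "no route to host" dial errors from the provisioning process (pid 61173) indicate that the VM at 192.168.39.154 is unreachable on its SSH port while minikube tries to re-provision it. A quick, hypothetical host-side reachability check (nc is assumed to be available on the host; the IP and port are taken from the log):
	
	  # hypothetical host-side check: is SSH on the VM reachable at all?
	  nc -vz -w 5 192.168.39.154 22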
	I1202 12:51:57.751276   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.250936   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.473500   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:00.487912   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:00.487973   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:00.526513   59162 cri.go:89] found id: ""
	I1202 12:52:00.526539   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.526548   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:00.526557   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:00.526620   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:00.561483   59162 cri.go:89] found id: ""
	I1202 12:52:00.561511   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.561519   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:00.561526   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:00.561583   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:00.592435   59162 cri.go:89] found id: ""
	I1202 12:52:00.592473   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.592484   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:00.592491   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:00.592551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:00.624686   59162 cri.go:89] found id: ""
	I1202 12:52:00.624710   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.624722   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:00.624727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:00.624771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:00.662610   59162 cri.go:89] found id: ""
	I1202 12:52:00.662639   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.662650   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:00.662657   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:00.662721   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:00.695972   59162 cri.go:89] found id: ""
	I1202 12:52:00.695993   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.696000   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:00.696006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:00.696048   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:00.727200   59162 cri.go:89] found id: ""
	I1202 12:52:00.727230   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.727253   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:00.727261   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:00.727316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:00.761510   59162 cri.go:89] found id: ""
	I1202 12:52:00.761536   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.761545   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:00.761556   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:00.761568   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:00.812287   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:00.812318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:00.825282   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:00.825309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:00.894016   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.894042   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:00.894065   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:00.972001   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:00.972034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:59.152373   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:01.153532   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.653266   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.084529   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:02.751465   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:04.752349   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.512982   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:03.528814   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:03.528884   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:03.564137   59162 cri.go:89] found id: ""
	I1202 12:52:03.564159   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.564166   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:03.564173   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:03.564223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:03.608780   59162 cri.go:89] found id: ""
	I1202 12:52:03.608811   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.608822   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:03.608829   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:03.608891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:03.644906   59162 cri.go:89] found id: ""
	I1202 12:52:03.644943   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.644954   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:03.644978   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:03.645052   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:03.676732   59162 cri.go:89] found id: ""
	I1202 12:52:03.676754   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.676761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:03.676767   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:03.676809   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:03.711338   59162 cri.go:89] found id: ""
	I1202 12:52:03.711362   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.711369   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:03.711375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:03.711424   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:03.743657   59162 cri.go:89] found id: ""
	I1202 12:52:03.743682   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.743689   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:03.743694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:03.743737   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:03.777740   59162 cri.go:89] found id: ""
	I1202 12:52:03.777759   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.777766   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:03.777772   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:03.777818   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:03.811145   59162 cri.go:89] found id: ""
	I1202 12:52:03.811169   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.811179   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:03.811190   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:03.811204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:03.862069   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:03.862093   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:03.875133   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:03.875164   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:03.947077   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:03.947102   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:03.947114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:04.023458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:04.023487   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:06.562323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:06.577498   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:06.577556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:06.613937   59162 cri.go:89] found id: ""
	I1202 12:52:06.613962   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.613970   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:06.613976   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:06.614023   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:06.647630   59162 cri.go:89] found id: ""
	I1202 12:52:06.647655   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.647662   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:06.647667   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:06.647711   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:06.683758   59162 cri.go:89] found id: ""
	I1202 12:52:06.683783   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.683793   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:06.683800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:06.683861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:06.722664   59162 cri.go:89] found id: ""
	I1202 12:52:06.722686   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.722694   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:06.722699   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:06.722747   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:06.756255   59162 cri.go:89] found id: ""
	I1202 12:52:06.756280   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.756290   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:06.756296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:06.756340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:06.792350   59162 cri.go:89] found id: ""
	I1202 12:52:06.792376   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.792387   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:06.792394   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:06.792450   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:06.827259   59162 cri.go:89] found id: ""
	I1202 12:52:06.827289   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.827301   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:06.827308   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:06.827367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:06.858775   59162 cri.go:89] found id: ""
	I1202 12:52:06.858795   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.858802   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:06.858811   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:06.858821   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:06.911764   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:06.911795   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:06.925297   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:06.925326   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:06.993703   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:06.993730   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:06.993744   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:07.073657   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:07.073685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:05.653526   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:08.152177   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:06.164438   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:07.251496   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.752479   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.611640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:09.626141   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:09.626199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:09.661406   59162 cri.go:89] found id: ""
	I1202 12:52:09.661425   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.661432   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:09.661439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:09.661498   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:09.698145   59162 cri.go:89] found id: ""
	I1202 12:52:09.698173   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.698184   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:09.698191   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:09.698252   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:09.732150   59162 cri.go:89] found id: ""
	I1202 12:52:09.732178   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.732189   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:09.732197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:09.732261   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:09.768040   59162 cri.go:89] found id: ""
	I1202 12:52:09.768063   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.768070   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:09.768076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:09.768130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:09.801038   59162 cri.go:89] found id: ""
	I1202 12:52:09.801064   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.801075   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:09.801082   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:09.801130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:09.841058   59162 cri.go:89] found id: ""
	I1202 12:52:09.841082   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.841089   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:09.841095   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:09.841137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:09.885521   59162 cri.go:89] found id: ""
	I1202 12:52:09.885541   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.885548   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:09.885554   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:09.885602   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:09.924759   59162 cri.go:89] found id: ""
	I1202 12:52:09.924779   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.924786   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:09.924793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:09.924804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.968241   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:09.968273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:10.020282   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:10.020315   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:10.036491   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:10.036519   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:10.113297   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.113324   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:10.113339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:12.688410   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:12.705296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:12.705356   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:12.743097   59162 cri.go:89] found id: ""
	I1202 12:52:12.743119   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.743127   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:12.743133   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:12.743187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:12.778272   59162 cri.go:89] found id: ""
	I1202 12:52:12.778292   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.778299   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:12.778304   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:12.778365   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:12.816087   59162 cri.go:89] found id: ""
	I1202 12:52:12.816116   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.816127   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:12.816134   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:12.816187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:12.850192   59162 cri.go:89] found id: ""
	I1202 12:52:12.850214   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.850221   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:12.850227   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:12.850282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:12.883325   59162 cri.go:89] found id: ""
	I1202 12:52:12.883351   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.883360   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:12.883367   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:12.883427   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:12.916121   59162 cri.go:89] found id: ""
	I1202 12:52:12.916157   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.916169   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:12.916176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:12.916251   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:12.946704   59162 cri.go:89] found id: ""
	I1202 12:52:12.946733   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.946746   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:12.946753   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:12.946802   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:12.979010   59162 cri.go:89] found id: ""
	I1202 12:52:12.979041   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.979050   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:12.979062   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:12.979075   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:13.062141   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:13.062171   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:13.111866   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:13.111900   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:13.162470   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:13.162498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:13.178497   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:13.178525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:13.245199   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.152556   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:12.153087   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.236522   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:12.249938   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:14.750814   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
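	The recurring pod_ready lines show the metrics-server pods never reaching a Ready status in either cluster. A hypothetical spot-check against the affected cluster (the kubectl context name is a placeholder; the pod name is taken from the log):
	
	  # hypothetical spot-check; <context> is a placeholder for the affected profile's kubectl context
	  kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-sn7tq -o wide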
	I1202 12:52:15.746327   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:15.760092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:15.760160   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:15.797460   59162 cri.go:89] found id: ""
	I1202 12:52:15.797484   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.797495   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:15.797503   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:15.797563   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:15.829969   59162 cri.go:89] found id: ""
	I1202 12:52:15.829998   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.830009   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:15.830017   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:15.830072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:15.862390   59162 cri.go:89] found id: ""
	I1202 12:52:15.862418   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.862428   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:15.862435   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:15.862484   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:15.895223   59162 cri.go:89] found id: ""
	I1202 12:52:15.895244   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.895251   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:15.895257   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:15.895311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:15.933157   59162 cri.go:89] found id: ""
	I1202 12:52:15.933184   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.933192   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:15.933197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:15.933245   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:15.964387   59162 cri.go:89] found id: ""
	I1202 12:52:15.964414   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.964425   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:15.964433   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:15.964487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:15.996803   59162 cri.go:89] found id: ""
	I1202 12:52:15.996825   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.996832   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:15.996837   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:15.996881   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:16.029364   59162 cri.go:89] found id: ""
	I1202 12:52:16.029394   59162 logs.go:282] 0 containers: []
	W1202 12:52:16.029402   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:16.029411   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:16.029422   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:16.098237   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:16.098264   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:16.098278   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:16.172386   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:16.172414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:16.216899   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:16.216923   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:16.281565   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:16.281591   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
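The block above is one iteration of the retry loop minikube runs while this control plane is down: for each expected component it asks CRI-O whether any container with that name exists, finds none, and then falls back to collecting describe-nodes, CRI-O, container-status, kubelet, and dmesg output. A minimal standalone sketch of the per-component check follows; it is not minikube's own ssh_runner/cri code, and it assumes crictl is on the local PATH with sudo available rather than being invoked over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container names the log above probes for, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same query the log shows: list all containers whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}

An empty result for every component, as seen repeatedly above, means the runtime is not running anything resembling a control plane, which is why the subsequent "describe nodes" call against localhost:8443 is refused.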
	I1202 12:52:14.154258   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:16.652807   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.316450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:18.388460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:16.751794   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:19.250295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:18.796337   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:18.809573   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:18.809637   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:18.847965   59162 cri.go:89] found id: ""
	I1202 12:52:18.847991   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.847999   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:18.848004   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:18.848053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:18.883714   59162 cri.go:89] found id: ""
	I1202 12:52:18.883741   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.883751   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:18.883758   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:18.883817   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:18.918581   59162 cri.go:89] found id: ""
	I1202 12:52:18.918605   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.918612   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:18.918617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:18.918672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:18.954394   59162 cri.go:89] found id: ""
	I1202 12:52:18.954426   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.954437   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:18.954443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:18.954502   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:18.995321   59162 cri.go:89] found id: ""
	I1202 12:52:18.995347   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.995355   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:18.995361   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:18.995423   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:19.034030   59162 cri.go:89] found id: ""
	I1202 12:52:19.034055   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.034066   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:19.034073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:19.034130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:19.073569   59162 cri.go:89] found id: ""
	I1202 12:52:19.073597   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.073609   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:19.073615   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:19.073662   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:19.112049   59162 cri.go:89] found id: ""
	I1202 12:52:19.112078   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.112090   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:19.112100   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:19.112113   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:19.180480   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.180502   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:19.180516   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:19.258236   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:19.258264   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:19.299035   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:19.299053   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:19.352572   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:19.352602   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:21.866524   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:21.879286   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:21.879340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:21.910463   59162 cri.go:89] found id: ""
	I1202 12:52:21.910489   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.910498   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:21.910504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:21.910551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:21.943130   59162 cri.go:89] found id: ""
	I1202 12:52:21.943157   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.943165   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:21.943171   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:21.943216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:21.976969   59162 cri.go:89] found id: ""
	I1202 12:52:21.976990   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.976997   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:21.977002   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:21.977055   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:22.022113   59162 cri.go:89] found id: ""
	I1202 12:52:22.022144   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.022153   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:22.022159   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:22.022218   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:22.057387   59162 cri.go:89] found id: ""
	I1202 12:52:22.057406   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.057413   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:22.057418   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:22.057459   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:22.089832   59162 cri.go:89] found id: ""
	I1202 12:52:22.089866   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.089892   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:22.089900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:22.089960   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:22.121703   59162 cri.go:89] found id: ""
	I1202 12:52:22.121727   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.121735   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:22.121740   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:22.121789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:22.155076   59162 cri.go:89] found id: ""
	I1202 12:52:22.155098   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.155108   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:22.155117   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:22.155137   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:22.234831   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:22.234862   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:22.273912   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:22.273945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:22.327932   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:22.327966   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:22.340890   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:22.340913   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:22.419371   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.153845   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.652993   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:23.653111   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.750980   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.250791   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
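The interleaved pod_ready lines come from two other concurrently running test processes (PIDs 57877 and 58902) polling their metrics-server pods, whose Ready condition never reports "True" within the timeout. A minimal sketch of an equivalent readiness poll, assuming kubectl is already pointed at the right cluster; the pod name is copied from the log, while the interval and attempt cap are illustrative rather than minikube's values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// JSONPath for the pod's Ready condition status ("True" / "False").
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "pod", "metrics-server-6867b74b74-sn7tq",
			"-o", "jsonpath="+jsonpath).Output()
		if err != nil {
			fmt.Printf("attempt %d: kubectl failed: %v\n", attempt, err)
		}
		status := strings.TrimSpace(string(out))
		fmt.Printf("attempt %d: Ready=%q\n", attempt, status)
		if status == "True" {
			return
		}
		time.Sleep(2 * time.Second)
	}
}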
	I1202 12:52:24.919868   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:24.935004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:24.935068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:24.972438   59162 cri.go:89] found id: ""
	I1202 12:52:24.972466   59162 logs.go:282] 0 containers: []
	W1202 12:52:24.972474   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:24.972480   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:24.972525   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:25.009282   59162 cri.go:89] found id: ""
	I1202 12:52:25.009310   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.009320   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:25.009329   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:25.009391   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:25.043227   59162 cri.go:89] found id: ""
	I1202 12:52:25.043254   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.043262   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:25.043267   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:25.043318   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:25.079167   59162 cri.go:89] found id: ""
	I1202 12:52:25.079191   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.079198   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:25.079204   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:25.079263   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:25.110308   59162 cri.go:89] found id: ""
	I1202 12:52:25.110332   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.110340   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:25.110346   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:25.110388   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:25.143804   59162 cri.go:89] found id: ""
	I1202 12:52:25.143830   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.143840   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:25.143846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:25.143903   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:25.178114   59162 cri.go:89] found id: ""
	I1202 12:52:25.178140   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.178147   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:25.178155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:25.178204   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:25.212632   59162 cri.go:89] found id: ""
	I1202 12:52:25.212665   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.212675   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:25.212684   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:25.212696   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:25.267733   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:25.267761   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:25.281025   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:25.281048   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:25.346497   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:25.346520   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:25.346531   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:25.437435   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:25.437469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:27.979493   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:27.993542   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:27.993615   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:28.030681   59162 cri.go:89] found id: ""
	I1202 12:52:28.030705   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.030712   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:28.030718   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:28.030771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:28.063991   59162 cri.go:89] found id: ""
	I1202 12:52:28.064019   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.064027   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:28.064032   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:28.064080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:28.097983   59162 cri.go:89] found id: ""
	I1202 12:52:28.098018   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.098029   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:28.098038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:28.098098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:28.131956   59162 cri.go:89] found id: ""
	I1202 12:52:28.131977   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.131987   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:28.131995   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:28.132071   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:28.170124   59162 cri.go:89] found id: ""
	I1202 12:52:28.170160   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.170171   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:28.170177   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:28.170238   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:28.203127   59162 cri.go:89] found id: ""
	I1202 12:52:28.203149   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.203157   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:28.203163   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:28.203216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:28.240056   59162 cri.go:89] found id: ""
	I1202 12:52:28.240081   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.240088   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:28.240094   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:28.240142   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:28.276673   59162 cri.go:89] found id: ""
	I1202 12:52:28.276699   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.276710   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:28.276720   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:28.276733   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:28.333435   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:28.333470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:28.347465   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:28.347491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:52:26.153244   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.153689   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:27.508437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:26.250897   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.250951   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.252183   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:52:28.432745   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:28.432777   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:28.432792   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:28.515984   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:28.516017   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.057069   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:31.070021   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:31.070084   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:31.106501   59162 cri.go:89] found id: ""
	I1202 12:52:31.106530   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.106540   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:31.106547   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:31.106606   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:31.141190   59162 cri.go:89] found id: ""
	I1202 12:52:31.141219   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.141230   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:31.141238   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:31.141298   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:31.176050   59162 cri.go:89] found id: ""
	I1202 12:52:31.176077   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.176087   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:31.176099   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:31.176169   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:31.211740   59162 cri.go:89] found id: ""
	I1202 12:52:31.211769   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.211780   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:31.211786   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:31.211831   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:31.248949   59162 cri.go:89] found id: ""
	I1202 12:52:31.248974   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.248983   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:31.248990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:31.249044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:31.284687   59162 cri.go:89] found id: ""
	I1202 12:52:31.284709   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.284717   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:31.284723   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:31.284765   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:31.317972   59162 cri.go:89] found id: ""
	I1202 12:52:31.317997   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.318004   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:31.318010   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:31.318065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:31.354866   59162 cri.go:89] found id: ""
	I1202 12:52:31.354893   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.354904   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:31.354914   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:31.354927   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:31.425168   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:31.425191   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:31.425202   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:31.508169   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:31.508204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.547193   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:31.547220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:31.601864   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:31.601892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:30.653415   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:33.153132   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.580471   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:32.752026   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:35.251960   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:34.115652   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:34.131644   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:34.131695   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:34.174473   59162 cri.go:89] found id: ""
	I1202 12:52:34.174500   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.174510   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:34.174518   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:34.174571   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:34.226162   59162 cri.go:89] found id: ""
	I1202 12:52:34.226190   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.226201   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:34.226208   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:34.226271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:34.269202   59162 cri.go:89] found id: ""
	I1202 12:52:34.269230   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.269240   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:34.269248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:34.269327   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:34.304571   59162 cri.go:89] found id: ""
	I1202 12:52:34.304604   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.304615   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:34.304621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:34.304670   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:34.339285   59162 cri.go:89] found id: ""
	I1202 12:52:34.339316   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.339327   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:34.339334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:34.339401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:34.374919   59162 cri.go:89] found id: ""
	I1202 12:52:34.374952   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.374964   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:34.374973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:34.375035   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:34.409292   59162 cri.go:89] found id: ""
	I1202 12:52:34.409319   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.409330   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:34.409337   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:34.409404   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:34.442536   59162 cri.go:89] found id: ""
	I1202 12:52:34.442561   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.442568   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:34.442576   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:34.442587   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:34.494551   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:34.494582   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.508684   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:34.508713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:34.572790   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:34.572816   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:34.572835   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:34.649327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:34.649358   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:37.190648   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:37.203913   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:37.203966   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:37.243165   59162 cri.go:89] found id: ""
	I1202 12:52:37.243186   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.243194   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:37.243199   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:37.243246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:37.279317   59162 cri.go:89] found id: ""
	I1202 12:52:37.279343   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.279351   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:37.279356   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:37.279411   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:37.312655   59162 cri.go:89] found id: ""
	I1202 12:52:37.312684   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.312693   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:37.312702   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:37.312748   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:37.346291   59162 cri.go:89] found id: ""
	I1202 12:52:37.346319   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.346328   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:37.346334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:37.346382   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:37.381534   59162 cri.go:89] found id: ""
	I1202 12:52:37.381555   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.381563   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:37.381569   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:37.381621   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:37.416990   59162 cri.go:89] found id: ""
	I1202 12:52:37.417013   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.417020   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:37.417026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:37.417083   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:37.451149   59162 cri.go:89] found id: ""
	I1202 12:52:37.451174   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.451182   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:37.451187   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:37.451233   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:37.485902   59162 cri.go:89] found id: ""
	I1202 12:52:37.485929   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.485940   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:37.485950   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:37.485970   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:37.541615   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:37.541645   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:37.554846   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:37.554866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:37.622432   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:37.622457   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:37.622471   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:37.708793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:37.708832   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:35.154170   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:37.653220   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:36.660437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:37.751726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.252016   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.246822   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:40.260893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:40.260959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:40.294743   59162 cri.go:89] found id: ""
	I1202 12:52:40.294773   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.294782   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:40.294789   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:40.294845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:40.338523   59162 cri.go:89] found id: ""
	I1202 12:52:40.338557   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.338570   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:40.338577   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:40.338628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:40.373134   59162 cri.go:89] found id: ""
	I1202 12:52:40.373162   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.373170   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:40.373176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:40.373225   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:40.410197   59162 cri.go:89] found id: ""
	I1202 12:52:40.410233   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.410247   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:40.410256   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:40.410333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:40.442497   59162 cri.go:89] found id: ""
	I1202 12:52:40.442521   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.442530   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:40.442536   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:40.442597   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:40.477835   59162 cri.go:89] found id: ""
	I1202 12:52:40.477863   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.477872   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:40.477879   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:40.477936   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:40.511523   59162 cri.go:89] found id: ""
	I1202 12:52:40.511547   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.511559   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:40.511567   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:40.511628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:40.545902   59162 cri.go:89] found id: ""
	I1202 12:52:40.545928   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.545942   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:40.545962   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:40.545976   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:40.595638   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:40.595669   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:40.609023   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:40.609043   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:40.680826   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:40.680848   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:40.680866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:40.756551   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:40.756579   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:43.295761   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:43.308764   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:43.308836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:43.343229   59162 cri.go:89] found id: ""
	I1202 12:52:43.343258   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.343268   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:43.343276   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:43.343335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:39.653604   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:42.152871   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:39.732455   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:42.750873   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.250740   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:43.376841   59162 cri.go:89] found id: ""
	I1202 12:52:43.376861   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.376868   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:43.376874   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:43.376918   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:43.415013   59162 cri.go:89] found id: ""
	I1202 12:52:43.415033   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.415041   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:43.415046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:43.415094   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:43.451563   59162 cri.go:89] found id: ""
	I1202 12:52:43.451590   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.451601   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:43.451608   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:43.451658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:43.492838   59162 cri.go:89] found id: ""
	I1202 12:52:43.492859   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.492867   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:43.492872   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:43.492934   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:43.531872   59162 cri.go:89] found id: ""
	I1202 12:52:43.531898   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.531908   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:43.531914   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:43.531957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:43.566235   59162 cri.go:89] found id: ""
	I1202 12:52:43.566260   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.566270   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:43.566277   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:43.566332   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:43.601502   59162 cri.go:89] found id: ""
	I1202 12:52:43.601531   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.601542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:43.601553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:43.601567   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:43.650984   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:43.651012   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:43.664273   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:43.664296   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:43.735791   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:43.735819   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:43.735833   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:43.817824   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:43.817861   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.356130   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:46.368755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:46.368835   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:46.404552   59162 cri.go:89] found id: ""
	I1202 12:52:46.404574   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.404582   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:46.404588   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:46.404640   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:46.438292   59162 cri.go:89] found id: ""
	I1202 12:52:46.438318   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.438329   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:46.438337   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:46.438397   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:46.471614   59162 cri.go:89] found id: ""
	I1202 12:52:46.471636   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.471643   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:46.471649   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:46.471752   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:46.502171   59162 cri.go:89] found id: ""
	I1202 12:52:46.502193   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.502201   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:46.502207   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:46.502250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:46.533820   59162 cri.go:89] found id: ""
	I1202 12:52:46.533842   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.533851   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:46.533859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:46.533914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:46.566891   59162 cri.go:89] found id: ""
	I1202 12:52:46.566918   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.566928   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:46.566936   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:46.566980   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:46.599112   59162 cri.go:89] found id: ""
	I1202 12:52:46.599143   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.599154   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:46.599161   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:46.599215   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:46.630794   59162 cri.go:89] found id: ""
	I1202 12:52:46.630837   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.630849   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:46.630860   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:46.630876   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:46.644180   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:46.644210   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:46.705881   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:46.705921   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:46.705936   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:46.781327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:46.781359   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.820042   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:46.820072   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:44.654330   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:47.152273   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.816427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:48.884464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:47.751118   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.752726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.368930   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:49.381506   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:49.381556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:49.417928   59162 cri.go:89] found id: ""
	I1202 12:52:49.417955   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.417965   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:49.417977   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:49.418034   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:49.450248   59162 cri.go:89] found id: ""
	I1202 12:52:49.450276   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.450286   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:49.450295   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:49.450366   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:49.484288   59162 cri.go:89] found id: ""
	I1202 12:52:49.484311   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.484318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:49.484323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:49.484372   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:49.518565   59162 cri.go:89] found id: ""
	I1202 12:52:49.518585   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.518595   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:49.518602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:49.518650   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:49.552524   59162 cri.go:89] found id: ""
	I1202 12:52:49.552549   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.552556   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:49.552561   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:49.552609   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:49.586570   59162 cri.go:89] found id: ""
	I1202 12:52:49.586599   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.586610   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:49.586617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:49.586672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:49.622561   59162 cri.go:89] found id: ""
	I1202 12:52:49.622590   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.622601   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:49.622609   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:49.622666   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:49.659092   59162 cri.go:89] found id: ""
	I1202 12:52:49.659117   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.659129   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:49.659152   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:49.659170   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:49.672461   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:49.672491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:49.738609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:49.738637   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:49.738670   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:49.820458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:49.820488   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.860240   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:49.860269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.411571   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:52.425037   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:52.425106   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:52.458215   59162 cri.go:89] found id: ""
	I1202 12:52:52.458244   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.458255   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:52.458262   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:52.458316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:52.491781   59162 cri.go:89] found id: ""
	I1202 12:52:52.491809   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.491820   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:52.491827   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:52.491879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:52.528829   59162 cri.go:89] found id: ""
	I1202 12:52:52.528855   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.528864   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:52.528870   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:52.528914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:52.560930   59162 cri.go:89] found id: ""
	I1202 12:52:52.560957   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.560965   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:52.560971   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:52.561021   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:52.594102   59162 cri.go:89] found id: ""
	I1202 12:52:52.594139   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.594152   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:52.594160   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:52.594222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:52.627428   59162 cri.go:89] found id: ""
	I1202 12:52:52.627452   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.627460   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:52.627465   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:52.627529   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:52.659143   59162 cri.go:89] found id: ""
	I1202 12:52:52.659167   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.659175   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:52.659180   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:52.659230   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:52.691603   59162 cri.go:89] found id: ""
	I1202 12:52:52.691625   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.691632   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:52.691640   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:52.691651   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.741989   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:52.742016   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:52.755769   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:52.755790   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:52.826397   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:52.826418   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:52.826431   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:52.904705   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:52.904734   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.653476   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:52.152372   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:51.755127   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.252182   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:55.449363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:55.462294   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:55.462350   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:55.500829   59162 cri.go:89] found id: ""
	I1202 12:52:55.500856   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.500865   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:55.500871   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:55.500927   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:55.533890   59162 cri.go:89] found id: ""
	I1202 12:52:55.533920   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.533931   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:55.533942   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:55.533998   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:55.566686   59162 cri.go:89] found id: ""
	I1202 12:52:55.566715   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.566725   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:55.566736   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:55.566790   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:55.598330   59162 cri.go:89] found id: ""
	I1202 12:52:55.598357   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.598367   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:55.598374   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:55.598429   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:55.630648   59162 cri.go:89] found id: ""
	I1202 12:52:55.630676   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.630686   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:55.630694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:55.630755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:55.664611   59162 cri.go:89] found id: ""
	I1202 12:52:55.664633   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.664640   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:55.664645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:55.664687   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:55.697762   59162 cri.go:89] found id: ""
	I1202 12:52:55.697789   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.697797   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:55.697803   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:55.697853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:55.735239   59162 cri.go:89] found id: ""
	I1202 12:52:55.735263   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.735271   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:55.735279   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:55.735292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:55.805187   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:55.805217   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:55.805233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:55.888420   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:55.888452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.927535   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:55.927561   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:55.976883   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:55.976909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:54.152753   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:56.154364   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.654202   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.968436   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:58.036631   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:56.750816   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.752427   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.490700   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:58.504983   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:58.505053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:58.541332   59162 cri.go:89] found id: ""
	I1202 12:52:58.541352   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.541359   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:58.541365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:58.541409   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:58.579437   59162 cri.go:89] found id: ""
	I1202 12:52:58.579459   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.579466   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:58.579472   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:58.579521   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:58.617374   59162 cri.go:89] found id: ""
	I1202 12:52:58.617406   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.617417   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:58.617425   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:58.617486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:58.653242   59162 cri.go:89] found id: ""
	I1202 12:52:58.653269   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.653280   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:58.653287   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:58.653345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:58.686171   59162 cri.go:89] found id: ""
	I1202 12:52:58.686201   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.686210   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:58.686215   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:58.686262   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:58.719934   59162 cri.go:89] found id: ""
	I1202 12:52:58.719956   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.719966   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:58.719974   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:58.720030   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:58.759587   59162 cri.go:89] found id: ""
	I1202 12:52:58.759610   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.759619   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:58.759626   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:58.759678   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:58.790885   59162 cri.go:89] found id: ""
	I1202 12:52:58.790908   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.790915   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:58.790922   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:58.790934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:58.840192   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:58.840220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.853639   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:58.853663   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:58.924643   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:58.924669   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:58.924679   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:59.013916   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:59.013945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.552305   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:01.565577   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:01.565642   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:01.598261   59162 cri.go:89] found id: ""
	I1202 12:53:01.598294   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.598304   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:01.598310   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:01.598377   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:01.631527   59162 cri.go:89] found id: ""
	I1202 12:53:01.631556   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.631565   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:01.631570   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:01.631631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:01.670788   59162 cri.go:89] found id: ""
	I1202 12:53:01.670812   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.670820   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:01.670826   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:01.670880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:01.708801   59162 cri.go:89] found id: ""
	I1202 12:53:01.708828   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.708838   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:01.708846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:01.708914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:01.746053   59162 cri.go:89] found id: ""
	I1202 12:53:01.746074   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.746083   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:01.746120   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:01.746184   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:01.780873   59162 cri.go:89] found id: ""
	I1202 12:53:01.780894   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.780901   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:01.780907   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:01.780951   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:01.817234   59162 cri.go:89] found id: ""
	I1202 12:53:01.817259   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.817269   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:01.817276   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:01.817335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:01.850277   59162 cri.go:89] found id: ""
	I1202 12:53:01.850302   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.850317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:01.850327   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:01.850342   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:01.933014   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:01.933055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.971533   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:01.971562   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:02.020280   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:02.020311   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:02.034786   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:02.034814   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:02.104013   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:01.152305   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.153925   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:01.250308   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.250937   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:05.751259   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.604595   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:04.618004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:04.618057   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:04.651388   59162 cri.go:89] found id: ""
	I1202 12:53:04.651414   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.651428   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:04.651436   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:04.651495   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:04.686973   59162 cri.go:89] found id: ""
	I1202 12:53:04.686998   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.687005   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:04.687019   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:04.687063   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:04.720630   59162 cri.go:89] found id: ""
	I1202 12:53:04.720654   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.720661   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:04.720667   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:04.720724   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:04.754657   59162 cri.go:89] found id: ""
	I1202 12:53:04.754682   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.754689   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:04.754694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:04.754746   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:04.787583   59162 cri.go:89] found id: ""
	I1202 12:53:04.787611   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.787621   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:04.787628   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:04.787686   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:04.818962   59162 cri.go:89] found id: ""
	I1202 12:53:04.818988   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.818999   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:04.819006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:04.819059   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:04.852015   59162 cri.go:89] found id: ""
	I1202 12:53:04.852035   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.852042   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:04.852047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:04.852097   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:04.886272   59162 cri.go:89] found id: ""
	I1202 12:53:04.886294   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.886301   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:04.886309   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:04.886320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:04.934682   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:04.934712   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:04.947889   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:04.947911   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:05.018970   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:05.018995   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:05.019010   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:05.098203   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:05.098233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:07.637320   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:07.650643   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:07.650706   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:07.683468   59162 cri.go:89] found id: ""
	I1202 12:53:07.683491   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.683499   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:07.683504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:07.683565   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:07.719765   59162 cri.go:89] found id: ""
	I1202 12:53:07.719792   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.719799   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:07.719805   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:07.719855   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:07.760939   59162 cri.go:89] found id: ""
	I1202 12:53:07.760986   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.760996   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:07.761004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:07.761066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:07.799175   59162 cri.go:89] found id: ""
	I1202 12:53:07.799219   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.799231   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:07.799239   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:07.799300   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:07.831957   59162 cri.go:89] found id: ""
	I1202 12:53:07.831987   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.831999   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:07.832007   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:07.832067   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:07.865982   59162 cri.go:89] found id: ""
	I1202 12:53:07.866008   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.866015   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:07.866022   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:07.866080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:07.903443   59162 cri.go:89] found id: ""
	I1202 12:53:07.903467   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.903477   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:07.903484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:07.903541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:07.939268   59162 cri.go:89] found id: ""
	I1202 12:53:07.939293   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.939300   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:07.939310   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:07.939324   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:07.952959   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:07.952984   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:08.039178   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:08.039207   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:08.039223   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:08.121432   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:08.121469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:08.164739   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:08.164767   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:05.652537   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:07.652894   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.116377   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:07.188477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:08.250489   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.250657   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.718599   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:10.731079   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:10.731154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:10.767605   59162 cri.go:89] found id: ""
	I1202 12:53:10.767626   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.767633   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:10.767639   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:10.767689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:10.800464   59162 cri.go:89] found id: ""
	I1202 12:53:10.800483   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.800491   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:10.800496   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:10.800554   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:10.840808   59162 cri.go:89] found id: ""
	I1202 12:53:10.840836   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.840853   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:10.840859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:10.840922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:10.877653   59162 cri.go:89] found id: ""
	I1202 12:53:10.877681   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.877690   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:10.877698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:10.877755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:10.915849   59162 cri.go:89] found id: ""
	I1202 12:53:10.915873   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.915883   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:10.915891   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:10.915953   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:10.948652   59162 cri.go:89] found id: ""
	I1202 12:53:10.948680   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.948691   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:10.948697   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:10.948755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:10.983126   59162 cri.go:89] found id: ""
	I1202 12:53:10.983154   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.983165   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:10.983172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:10.983232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:11.015350   59162 cri.go:89] found id: ""
	I1202 12:53:11.015378   59162 logs.go:282] 0 containers: []
	W1202 12:53:11.015390   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:11.015400   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:11.015414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:11.028713   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:11.028737   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:11.095904   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:11.095932   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:11.095950   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:11.179078   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:11.179114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:11.216075   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:11.216106   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:09.653482   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:12.152117   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.272450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:12.750358   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:14.751316   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.774975   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:13.787745   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:13.787804   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:13.821793   59162 cri.go:89] found id: ""
	I1202 12:53:13.821824   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.821834   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:13.821840   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:13.821885   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:13.854831   59162 cri.go:89] found id: ""
	I1202 12:53:13.854855   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.854864   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:13.854871   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:13.854925   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:13.885113   59162 cri.go:89] found id: ""
	I1202 12:53:13.885142   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.885149   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:13.885155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:13.885201   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:13.915811   59162 cri.go:89] found id: ""
	I1202 12:53:13.915841   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.915851   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:13.915859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:13.915914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:13.948908   59162 cri.go:89] found id: ""
	I1202 12:53:13.948936   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.948946   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:13.948953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:13.949016   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:13.986502   59162 cri.go:89] found id: ""
	I1202 12:53:13.986531   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.986540   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:13.986548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:13.986607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:14.018182   59162 cri.go:89] found id: ""
	I1202 12:53:14.018210   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.018221   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:14.018229   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:14.018287   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:14.054185   59162 cri.go:89] found id: ""
	I1202 12:53:14.054221   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.054233   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:14.054244   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:14.054272   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:14.131353   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.131381   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:14.131402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:14.212787   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:14.212822   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:14.254043   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:14.254073   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:14.309591   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:14.309620   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:16.824827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:16.838150   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:16.838210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:16.871550   59162 cri.go:89] found id: ""
	I1202 12:53:16.871570   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.871577   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:16.871582   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:16.871625   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:16.908736   59162 cri.go:89] found id: ""
	I1202 12:53:16.908766   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.908775   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:16.908781   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:16.908844   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:16.941404   59162 cri.go:89] found id: ""
	I1202 12:53:16.941427   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.941437   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:16.941444   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:16.941500   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:16.971984   59162 cri.go:89] found id: ""
	I1202 12:53:16.972011   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.972023   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:16.972030   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:16.972079   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:17.004573   59162 cri.go:89] found id: ""
	I1202 12:53:17.004596   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.004607   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:17.004614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:17.004661   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:17.037171   59162 cri.go:89] found id: ""
	I1202 12:53:17.037199   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.037210   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:17.037218   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:17.037271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:17.070862   59162 cri.go:89] found id: ""
	I1202 12:53:17.070888   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.070899   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:17.070906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:17.070959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:17.102642   59162 cri.go:89] found id: ""
	I1202 12:53:17.102668   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.102678   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:17.102688   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:17.102701   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:17.182590   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:17.182623   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:17.224313   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:17.224346   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:17.272831   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:17.272855   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:17.286217   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:17.286240   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:17.357274   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.153570   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.651955   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:18.654103   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.340429   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:17.252036   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.751295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.858294   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:19.871731   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:19.871787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:19.906270   59162 cri.go:89] found id: ""
	I1202 12:53:19.906290   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.906297   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:19.906303   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:19.906345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:19.937769   59162 cri.go:89] found id: ""
	I1202 12:53:19.937790   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.937797   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:19.937802   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:19.937845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:19.971667   59162 cri.go:89] found id: ""
	I1202 12:53:19.971689   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.971706   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:19.971714   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:19.971787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:20.005434   59162 cri.go:89] found id: ""
	I1202 12:53:20.005455   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.005461   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:20.005467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:20.005512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:20.041817   59162 cri.go:89] found id: ""
	I1202 12:53:20.041839   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.041848   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:20.041856   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:20.041906   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:20.073923   59162 cri.go:89] found id: ""
	I1202 12:53:20.073946   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.073958   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:20.073966   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:20.074026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:20.107360   59162 cri.go:89] found id: ""
	I1202 12:53:20.107398   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.107409   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:20.107416   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:20.107479   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:20.153919   59162 cri.go:89] found id: ""
	I1202 12:53:20.153942   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.153952   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:20.153963   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:20.153977   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:20.211581   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:20.211610   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:20.227589   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:20.227615   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:20.305225   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:20.305250   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:20.305265   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:20.382674   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:20.382713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:22.924662   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:22.940038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:22.940101   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:22.984768   59162 cri.go:89] found id: ""
	I1202 12:53:22.984795   59162 logs.go:282] 0 containers: []
	W1202 12:53:22.984806   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:22.984815   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:22.984876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:23.024159   59162 cri.go:89] found id: ""
	I1202 12:53:23.024180   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.024188   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:23.024194   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:23.024254   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:23.059929   59162 cri.go:89] found id: ""
	I1202 12:53:23.059948   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.059956   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:23.059961   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:23.060003   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:23.093606   59162 cri.go:89] found id: ""
	I1202 12:53:23.093627   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.093633   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:23.093639   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:23.093689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:23.127868   59162 cri.go:89] found id: ""
	I1202 12:53:23.127893   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.127904   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:23.127910   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:23.127965   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:23.164988   59162 cri.go:89] found id: ""
	I1202 12:53:23.165006   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.165013   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:23.165018   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:23.165058   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:23.196389   59162 cri.go:89] found id: ""
	I1202 12:53:23.196412   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.196423   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:23.196430   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:23.196481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:23.229337   59162 cri.go:89] found id: ""
	I1202 12:53:23.229358   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.229366   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:23.229376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:23.229404   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:23.284041   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:23.284066   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:23.297861   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:23.297884   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:53:21.152126   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:23.154090   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:22.420399   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:22.250790   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:24.252122   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:53:23.364113   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:23.364131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:23.364142   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:23.446244   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:23.446273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:25.986668   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:25.998953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:25.999013   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:26.034844   59162 cri.go:89] found id: ""
	I1202 12:53:26.034868   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.034876   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:26.034883   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:26.034938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:26.067050   59162 cri.go:89] found id: ""
	I1202 12:53:26.067076   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.067083   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:26.067089   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:26.067152   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:26.098705   59162 cri.go:89] found id: ""
	I1202 12:53:26.098735   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.098746   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:26.098754   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:26.098812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:26.131283   59162 cri.go:89] found id: ""
	I1202 12:53:26.131312   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.131321   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:26.131327   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:26.131379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:26.164905   59162 cri.go:89] found id: ""
	I1202 12:53:26.164933   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.164943   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:26.164950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:26.165009   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:26.196691   59162 cri.go:89] found id: ""
	I1202 12:53:26.196715   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.196724   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:26.196732   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:26.196789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:26.227341   59162 cri.go:89] found id: ""
	I1202 12:53:26.227364   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.227374   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:26.227380   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:26.227436   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:26.260569   59162 cri.go:89] found id: ""
	I1202 12:53:26.260589   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.260597   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:26.260606   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:26.260619   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:26.313150   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:26.313175   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:26.327732   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:26.327762   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:26.392748   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:26.392768   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:26.392778   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:26.474456   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:26.474484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:24.146771   58902 pod_ready.go:82] duration metric: took 4m0.000100995s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" ...
	E1202 12:53:24.146796   58902 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 12:53:24.146811   58902 pod_ready.go:39] duration metric: took 4m6.027386938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:53:24.146852   58902 kubeadm.go:597] duration metric: took 4m15.570212206s to restartPrimaryControlPlane
	W1202 12:53:24.146901   58902 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:24.146926   58902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:25.492478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:26.253906   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:28.752313   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:29.018514   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:29.032328   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:29.032457   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:29.067696   59162 cri.go:89] found id: ""
	I1202 12:53:29.067720   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.067732   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:29.067738   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:29.067794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:29.101076   59162 cri.go:89] found id: ""
	I1202 12:53:29.101096   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.101103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:29.101108   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:29.101150   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:29.136446   59162 cri.go:89] found id: ""
	I1202 12:53:29.136473   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.136483   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:29.136489   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:29.136552   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:29.170820   59162 cri.go:89] found id: ""
	I1202 12:53:29.170849   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.170860   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:29.170868   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:29.170931   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:29.205972   59162 cri.go:89] found id: ""
	I1202 12:53:29.206001   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.206012   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:29.206020   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:29.206086   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:29.242118   59162 cri.go:89] found id: ""
	I1202 12:53:29.242155   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.242165   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:29.242172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:29.242222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:29.281377   59162 cri.go:89] found id: ""
	I1202 12:53:29.281405   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.281417   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:29.281426   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:29.281487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:29.316350   59162 cri.go:89] found id: ""
	I1202 12:53:29.316381   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.316393   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:29.316404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:29.316418   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:29.392609   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:29.392648   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.430777   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:29.430804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:29.484157   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:29.484190   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:29.498434   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:29.498457   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:29.568203   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.069043   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:32.081796   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:32.081867   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:32.115767   59162 cri.go:89] found id: ""
	I1202 12:53:32.115789   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.115797   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:32.115802   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:32.115861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:32.145962   59162 cri.go:89] found id: ""
	I1202 12:53:32.145984   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.145992   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:32.145999   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:32.146046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:32.177709   59162 cri.go:89] found id: ""
	I1202 12:53:32.177734   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.177744   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:32.177752   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:32.177796   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:32.211897   59162 cri.go:89] found id: ""
	I1202 12:53:32.211921   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.211930   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:32.211937   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:32.211994   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:32.244401   59162 cri.go:89] found id: ""
	I1202 12:53:32.244425   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.244434   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:32.244442   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:32.244503   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:32.278097   59162 cri.go:89] found id: ""
	I1202 12:53:32.278123   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.278140   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:32.278151   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:32.278210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:32.312740   59162 cri.go:89] found id: ""
	I1202 12:53:32.312774   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.312785   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:32.312793   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:32.312860   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:32.345849   59162 cri.go:89] found id: ""
	I1202 12:53:32.345878   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.345889   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:32.345901   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:32.345917   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:32.395961   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:32.395998   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:32.409582   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:32.409609   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:32.473717   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.473746   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:32.473763   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:32.548547   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:32.548580   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:31.572430   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:31.251492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:33.251616   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.750762   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.088628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:35.102152   59162 kubeadm.go:597] duration metric: took 4m2.014751799s to restartPrimaryControlPlane
	W1202 12:53:35.102217   59162 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:35.102244   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:36.768528   59162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666262663s)
	I1202 12:53:36.768601   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:36.783104   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:36.792966   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:36.802188   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:36.802205   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:36.802234   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:36.811253   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:36.811290   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:36.820464   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:36.829386   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:36.829426   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:36.838814   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.847241   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:36.847272   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.856295   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:36.864892   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:36.864929   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:36.873699   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:37.076297   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:53:34.644489   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:38.250676   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.250779   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.724427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:43.796493   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:42.251341   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:44.751292   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.547760   58902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.400809303s)
	I1202 12:53:50.547840   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:50.564051   58902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:50.573674   58902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:50.582945   58902 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:50.582965   58902 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:50.582998   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:50.591979   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:50.592030   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:50.601043   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:50.609896   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:50.609945   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:50.618918   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.627599   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:50.627634   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.636459   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:50.644836   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:50.644880   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:50.653742   58902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:50.698104   58902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 12:53:50.698187   58902 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:53:50.811202   58902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:53:50.811340   58902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:53:50.811466   58902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 12:53:50.822002   58902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:53:47.252492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:49.750168   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.823836   58902 out.go:235]   - Generating certificates and keys ...
	I1202 12:53:50.823933   58902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:53:50.824031   58902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:53:50.824141   58902 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:53:50.824223   58902 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:53:50.824328   58902 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:53:50.824402   58902 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:53:50.824500   58902 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:53:50.824583   58902 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:53:50.824697   58902 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:53:50.824826   58902 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:53:50.824896   58902 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:53:50.824984   58902 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:53:50.912363   58902 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:53:50.997719   58902 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 12:53:51.181182   58902 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:53:51.424413   58902 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:53:51.526033   58902 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:53:51.526547   58902 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:53:51.528947   58902 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:53:51.530665   58902 out.go:235]   - Booting up control plane ...
	I1202 12:53:51.530761   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:53:51.530862   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:53:51.530946   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:53:51.551867   58902 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:53:51.557869   58902 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:53:51.557960   58902 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:53:51.690048   58902 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 12:53:51.690190   58902 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 12:53:52.190616   58902 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56624ms
	I1202 12:53:52.190735   58902 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 12:53:49.876477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:52.948470   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:51.752318   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:54.250701   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:57.192620   58902 kubeadm.go:310] [api-check] The API server is healthy after 5.001974319s
	I1202 12:53:57.205108   58902 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 12:53:57.217398   58902 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 12:53:57.241642   58902 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 12:53:57.241842   58902 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-953044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 12:53:57.252962   58902 kubeadm.go:310] [bootstrap-token] Using token: kqbw67.r50dkuvxntafmbtm
	I1202 12:53:57.254175   58902 out.go:235]   - Configuring RBAC rules ...
	I1202 12:53:57.254282   58902 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 12:53:57.258707   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 12:53:57.265127   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 12:53:57.268044   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 12:53:57.273630   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 12:53:57.276921   58902 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 12:53:57.598936   58902 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 12:53:58.031759   58902 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 12:53:58.598943   58902 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 12:53:58.599838   58902 kubeadm.go:310] 
	I1202 12:53:58.599900   58902 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 12:53:58.599927   58902 kubeadm.go:310] 
	I1202 12:53:58.600020   58902 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 12:53:58.600031   58902 kubeadm.go:310] 
	I1202 12:53:58.600067   58902 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 12:53:58.600150   58902 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 12:53:58.600249   58902 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 12:53:58.600266   58902 kubeadm.go:310] 
	I1202 12:53:58.600343   58902 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 12:53:58.600353   58902 kubeadm.go:310] 
	I1202 12:53:58.600418   58902 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 12:53:58.600429   58902 kubeadm.go:310] 
	I1202 12:53:58.600500   58902 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 12:53:58.600602   58902 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 12:53:58.600694   58902 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 12:53:58.600704   58902 kubeadm.go:310] 
	I1202 12:53:58.600878   58902 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 12:53:58.600996   58902 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 12:53:58.601008   58902 kubeadm.go:310] 
	I1202 12:53:58.601121   58902 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601248   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 12:53:58.601281   58902 kubeadm.go:310] 	--control-plane 
	I1202 12:53:58.601298   58902 kubeadm.go:310] 
	I1202 12:53:58.601437   58902 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 12:53:58.601451   58902 kubeadm.go:310] 
	I1202 12:53:58.601570   58902 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601726   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 12:53:58.601878   58902 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:53:58.602090   58902 cni.go:84] Creating CNI manager for ""
	I1202 12:53:58.602108   58902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:53:58.603597   58902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:53:58.604832   58902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:53:58.616597   58902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 12:53:58.633585   58902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 12:53:58.633639   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:58.633694   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-953044 minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=embed-certs-953044 minikube.k8s.io/primary=true
	I1202 12:53:58.843567   58902 ops.go:34] apiserver oom_adj: -16
	I1202 12:53:58.843643   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:56.252079   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:58.750596   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:59.344179   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:59.844667   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.343766   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.843808   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.343992   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.843750   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.344088   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.431425   58902 kubeadm.go:1113] duration metric: took 3.797838401s to wait for elevateKubeSystemPrivileges
	I1202 12:54:02.431466   58902 kubeadm.go:394] duration metric: took 4m53.907154853s to StartCluster
	I1202 12:54:02.431488   58902 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.431574   58902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:54:02.433388   58902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.433759   58902 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:54:02.433844   58902 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 12:54:02.433961   58902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-953044"
	I1202 12:54:02.433979   58902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-953044"
	I1202 12:54:02.433978   58902 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:54:02.433983   58902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-953044"
	I1202 12:54:02.434009   58902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-953044"
	I1202 12:54:02.433983   58902 addons.go:69] Setting metrics-server=true in profile "embed-certs-953044"
	I1202 12:54:02.434082   58902 addons.go:234] Setting addon metrics-server=true in "embed-certs-953044"
	W1202 12:54:02.434090   58902 addons.go:243] addon metrics-server should already be in state true
	I1202 12:54:02.434121   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	W1202 12:54:02.433990   58902 addons.go:243] addon storage-provisioner should already be in state true
	I1202 12:54:02.434195   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.434500   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434544   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434550   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434566   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434589   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434606   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.435408   58902 out.go:177] * Verifying Kubernetes components...
	I1202 12:54:02.436893   58902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:54:02.450113   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1202 12:54:02.450620   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.451022   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.451047   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.451376   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.451545   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.454345   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I1202 12:54:02.454346   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1202 12:54:02.454788   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.454832   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.455251   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455268   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455281   58902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-953044"
	W1202 12:54:02.455303   58902 addons.go:243] addon default-storageclass should already be in state true
	I1202 12:54:02.455336   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.455286   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455377   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455570   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455696   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455708   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.455739   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456068   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456085   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456105   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456122   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.470558   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I1202 12:54:02.470761   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I1202 12:54:02.470971   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471035   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43157
	I1202 12:54:02.471142   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471406   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471426   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471494   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471620   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471633   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471955   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472019   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.472035   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.472110   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472127   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472446   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472647   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472685   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.472721   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.474380   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.474597   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.476328   58902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 12:54:02.476338   58902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:54:02.477992   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 12:54:02.478008   58902 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 12:54:02.478022   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.478549   58902 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.478567   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 12:54:02.478584   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.481364   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481698   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.481725   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481956   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.482008   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482150   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.482274   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.482417   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.482503   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.482521   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482785   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.483079   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.483352   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.483478   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.489285   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I1202 12:54:02.489644   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.490064   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.490085   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.490346   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.490510   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.491774   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.491961   58902 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.491974   58902 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 12:54:02.491990   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.494680   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495069   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.495098   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495259   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.495392   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.495582   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.495700   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.626584   58902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:54:02.650914   58902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658909   58902 node_ready.go:49] node "embed-certs-953044" has status "Ready":"True"
	I1202 12:54:02.658931   58902 node_ready.go:38] duration metric: took 7.986729ms for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658939   58902 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:02.663878   58902 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:02.708572   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.711794   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 12:54:02.711813   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 12:54:02.729787   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.760573   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 12:54:02.760595   58902 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 12:54:02.814731   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:02.814756   58902 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 12:54:02.867045   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:03.549497   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.549532   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.549914   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.549970   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.549999   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550010   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.550032   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.550256   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.550360   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550336   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551311   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551333   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551629   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551591   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.551670   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.551686   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551694   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551907   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.552278   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.552295   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.577295   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.577322   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.577618   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.577631   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.577647   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.835721   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.835752   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836073   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836092   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836108   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.836118   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836460   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836478   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836489   58902 addons.go:475] Verifying addon metrics-server=true in "embed-certs-953044"
	I1202 12:54:03.836492   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.838858   58902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1202 12:54:03.840263   58902 addons.go:510] duration metric: took 1.406440873s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
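
The segment above ends with the three addons (storage-provisioner, default-storageclass, metrics-server) applied and verified. For reference, the single `kubectl apply -f ... -f ...` invocation logged at 12:54:02.867045 can be sketched as below. This is only an illustration: minikube runs the command over SSH inside the VM through its ssh_runner, whereas this sketch shells out locally with os/exec (an assumption made to keep it self-contained); the binary path, kubeconfig path, `sudo KUBECONFIG=...` form, and manifest paths are copied from the log lines above.

```go
// Minimal sketch of the addon-apply step shown in the log. Executed locally with
// os/exec for illustration; the real run executes the same command inside the VM.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddonManifests(manifests []string) error {
	// sudo KUBECONFIG=... /var/lib/minikube/binaries/v1.31.2/kubectl apply -f a.yaml -f b.yaml ...
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	_ = applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
}
```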
	I1202 12:53:59.032460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:02.100433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:01.251084   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:03.252024   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:05.752273   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:04.669768   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:07.171770   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:08.180411   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:08.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.751482   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:09.670413   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.669602   58902 pod_ready.go:93] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.669624   58902 pod_ready.go:82] duration metric: took 8.00571576s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.669634   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674276   58902 pod_ready.go:93] pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.674293   58902 pod_ready.go:82] duration metric: took 4.652882ms for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674301   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678330   58902 pod_ready.go:93] pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.678346   58902 pod_ready.go:82] duration metric: took 4.037883ms for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678354   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184565   58902 pod_ready.go:93] pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:12.184591   58902 pod_ready.go:82] duration metric: took 1.506229118s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184601   58902 pod_ready.go:39] duration metric: took 9.525652092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:12.184622   58902 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:12.184683   58902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:12.204339   58902 api_server.go:72] duration metric: took 9.770541552s to wait for apiserver process to appear ...
	I1202 12:54:12.204361   58902 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:12.204383   58902 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8443/healthz ...
	I1202 12:54:12.208020   58902 api_server.go:279] https://192.168.72.203:8443/healthz returned 200:
	ok
	I1202 12:54:12.208957   58902 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:12.208975   58902 api_server.go:131] duration metric: took 4.608337ms to wait for apiserver health ...
	I1202 12:54:12.208982   58902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:12.215103   58902 system_pods.go:59] 9 kube-system pods found
	I1202 12:54:12.215123   58902 system_pods.go:61] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.215128   58902 system_pods.go:61] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.215132   58902 system_pods.go:61] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.215135   58902 system_pods.go:61] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.215145   58902 system_pods.go:61] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.215150   58902 system_pods.go:61] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.215157   58902 system_pods.go:61] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.215171   58902 system_pods.go:61] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.215181   58902 system_pods.go:61] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.215190   58902 system_pods.go:74] duration metric: took 6.203134ms to wait for pod list to return data ...
	I1202 12:54:12.215198   58902 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:12.217406   58902 default_sa.go:45] found service account: "default"
	I1202 12:54:12.217421   58902 default_sa.go:55] duration metric: took 2.217536ms for default service account to be created ...
	I1202 12:54:12.217427   58902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:12.221673   58902 system_pods.go:86] 9 kube-system pods found
	I1202 12:54:12.221690   58902 system_pods.go:89] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.221695   58902 system_pods.go:89] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.221701   58902 system_pods.go:89] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.221705   58902 system_pods.go:89] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.221709   58902 system_pods.go:89] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.221712   58902 system_pods.go:89] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.221716   58902 system_pods.go:89] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.221724   58902 system_pods.go:89] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.221729   58902 system_pods.go:89] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.221736   58902 system_pods.go:126] duration metric: took 4.304449ms to wait for k8s-apps to be running ...
	I1202 12:54:12.221745   58902 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:12.221780   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:12.238687   58902 system_svc.go:56] duration metric: took 16.934566ms WaitForService to wait for kubelet
	I1202 12:54:12.238707   58902 kubeadm.go:582] duration metric: took 9.804914519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:12.238722   58902 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:12.268746   58902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:12.268776   58902 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:12.268790   58902 node_conditions.go:105] duration metric: took 30.063656ms to run NodePressure ...
	I1202 12:54:12.268802   58902 start.go:241] waiting for startup goroutines ...
	I1202 12:54:12.268813   58902 start.go:246] waiting for cluster config update ...
	I1202 12:54:12.268828   58902 start.go:255] writing updated cluster config ...
	I1202 12:54:12.269149   58902 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:12.315523   58902 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:12.317559   58902 out.go:177] * Done! kubectl is now configured to use "embed-certs-953044" cluster and "default" namespace by default
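
The embed-certs-953044 start above completes only after the health probe logged as "Checking apiserver healthz at https://192.168.72.203:8443/healthz ... returned 200: ok". A minimal sketch of that probe as a plain HTTPS poll follows. It is an illustration, not minikube's implementation: the real client authenticates with the cluster's client certificates, while this sketch skips TLS verification purely so the example stands alone.

```go
// Sketch: poll the apiserver /healthz endpoint until it answers 200 "ok",
// mirroring the "returned 200: ok" lines in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the real check uses the cluster's client certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.203:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```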
	I1202 12:54:11.252465   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:13.251203   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:15.251601   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:17.332421   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:17.751347   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.252108   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.404508   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:21.252458   57877 pod_ready.go:82] duration metric: took 4m0.007570673s for pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace to be "Ready" ...
	E1202 12:54:21.252479   57877 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1202 12:54:21.252487   57877 pod_ready.go:39] duration metric: took 4m2.808635222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:21.252501   57877 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:21.252524   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:21.252565   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:21.311644   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:21.311663   57877 cri.go:89] found id: ""
	I1202 12:54:21.311670   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:21.311712   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.316826   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:21.316881   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:21.366930   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:21.366951   57877 cri.go:89] found id: ""
	I1202 12:54:21.366959   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:21.366999   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.371132   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:21.371194   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:21.405238   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.405261   57877 cri.go:89] found id: ""
	I1202 12:54:21.405270   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:21.405312   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.409631   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:21.409687   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:21.444516   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.444535   57877 cri.go:89] found id: ""
	I1202 12:54:21.444542   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:21.444583   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.448736   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:21.448796   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:21.485458   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:21.485484   57877 cri.go:89] found id: ""
	I1202 12:54:21.485494   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:21.485546   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.489882   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:21.489953   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:21.525951   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.525971   57877 cri.go:89] found id: ""
	I1202 12:54:21.525978   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:21.526028   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.530141   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:21.530186   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:21.564886   57877 cri.go:89] found id: ""
	I1202 12:54:21.564909   57877 logs.go:282] 0 containers: []
	W1202 12:54:21.564920   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:21.564928   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:21.564981   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:21.601560   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.601585   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:21.601593   57877 cri.go:89] found id: ""
	I1202 12:54:21.601603   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:21.601660   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.605710   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.609870   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:21.609892   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.645558   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:21.645581   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.680733   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:21.680764   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.731429   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:21.731452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.764658   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:21.764680   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:22.249475   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:22.249511   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:22.305127   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:22.305162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:22.369496   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:22.369528   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:22.384486   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:22.384510   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:22.425402   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:22.425424   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:22.463801   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:22.463828   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:22.507022   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:22.507048   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:22.638422   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:22.638452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.190880   57877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:25.206797   57877 api_server.go:72] duration metric: took 4m14.027370187s to wait for apiserver process to appear ...
	I1202 12:54:25.206823   57877 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:25.206866   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:25.206924   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:25.241643   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.241669   57877 cri.go:89] found id: ""
	I1202 12:54:25.241680   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:25.241734   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.245997   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:25.246037   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:25.290955   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:25.290973   57877 cri.go:89] found id: ""
	I1202 12:54:25.290980   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:25.291029   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.295284   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:25.295329   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:25.333254   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:25.333275   57877 cri.go:89] found id: ""
	I1202 12:54:25.333284   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:25.333328   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.337649   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:25.337698   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:25.371662   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.371682   57877 cri.go:89] found id: ""
	I1202 12:54:25.371691   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:25.371739   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.376026   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:25.376075   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:25.411223   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:25.411238   57877 cri.go:89] found id: ""
	I1202 12:54:25.411245   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:25.411287   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.415307   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:25.415351   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:25.451008   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:25.451027   57877 cri.go:89] found id: ""
	I1202 12:54:25.451035   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:25.451089   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.455681   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:25.455727   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:25.499293   57877 cri.go:89] found id: ""
	I1202 12:54:25.499315   57877 logs.go:282] 0 containers: []
	W1202 12:54:25.499325   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:25.499332   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:25.499377   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:25.533874   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:25.533896   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:25.533903   57877 cri.go:89] found id: ""
	I1202 12:54:25.533912   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:25.533961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.537993   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.541881   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:25.541899   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:25.645488   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:25.645512   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.683783   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:25.683807   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:26.120334   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:26.120367   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:26.484425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:26.190493   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:26.190521   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:26.235397   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:26.235421   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:26.285411   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:26.285452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:26.331807   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:26.331836   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:26.374437   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:26.374461   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:26.436459   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:26.436487   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:26.472126   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:26.472162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:26.504819   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:26.504840   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:26.518789   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:26.518821   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
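
The repeated "listing CRI containers ... Gathering logs for ..." passages above follow one pattern per component: `crictl ps -a --quiet --name=<component>` to discover container IDs, then `crictl logs --tail 400 <id>` for each ID found. The sketch below renders that two-step pattern with os/exec run locally; minikube itself issues the same commands through its SSH runner on the VM, so the direct exec is an assumption for illustration only.

```go
// Sketch of the per-component log-gathering pattern visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherComponentLogs(name string) error {
	// Step 1: list all containers (any state) whose name matches the component.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return fmt.Errorf("crictl ps for %q: %v", name, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		// Mirrors the kindnet case above: "0 containers ... No container was found".
		fmt.Printf("no container found matching %q\n", name)
		return nil
	}
	// Step 2: dump the last 400 log lines of each matching container.
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("crictl logs %s: %v", id, err)
		}
		fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		_ = gatherComponentLogs(c)
	}
}
```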
	I1202 12:54:29.069521   57877 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 12:54:29.074072   57877 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 12:54:29.075022   57877 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:29.075041   57877 api_server.go:131] duration metric: took 3.868210222s to wait for apiserver health ...
	I1202 12:54:29.075048   57877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:29.075069   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:29.075112   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:29.110715   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:29.110735   57877 cri.go:89] found id: ""
	I1202 12:54:29.110742   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:29.110790   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.114994   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:29.115040   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:29.150431   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.150459   57877 cri.go:89] found id: ""
	I1202 12:54:29.150468   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:29.150525   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.154909   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:29.154967   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:29.198139   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.198162   57877 cri.go:89] found id: ""
	I1202 12:54:29.198172   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:29.198224   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.202969   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:29.203031   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:29.243771   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.243795   57877 cri.go:89] found id: ""
	I1202 12:54:29.243802   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:29.243843   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.248039   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:29.248106   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:29.286473   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.286492   57877 cri.go:89] found id: ""
	I1202 12:54:29.286498   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:29.286538   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.290543   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:29.290590   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:29.327899   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.327916   57877 cri.go:89] found id: ""
	I1202 12:54:29.327922   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:29.327961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.332516   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:29.332571   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:29.368204   57877 cri.go:89] found id: ""
	I1202 12:54:29.368236   57877 logs.go:282] 0 containers: []
	W1202 12:54:29.368247   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:29.368255   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:29.368301   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:29.407333   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.407358   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.407364   57877 cri.go:89] found id: ""
	I1202 12:54:29.407372   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:29.407425   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.412153   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.416525   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:29.416548   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.457360   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:29.457394   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.495662   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:29.495691   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.549304   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:29.549331   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.585693   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:29.585718   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.621888   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:29.621912   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.670118   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:29.670153   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:29.685833   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:29.685855   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:29.792525   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:29.792555   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.837090   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:29.837138   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.872862   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:29.872893   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:30.228483   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:30.228523   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:30.298252   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:30.298285   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:32.851536   57877 system_pods.go:59] 8 kube-system pods found
	I1202 12:54:32.851562   57877 system_pods.go:61] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.851567   57877 system_pods.go:61] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.851571   57877 system_pods.go:61] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.851574   57877 system_pods.go:61] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.851577   57877 system_pods.go:61] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.851580   57877 system_pods.go:61] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.851586   57877 system_pods.go:61] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.851590   57877 system_pods.go:61] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.851597   57877 system_pods.go:74] duration metric: took 3.776542886s to wait for pod list to return data ...
	I1202 12:54:32.851604   57877 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:32.853911   57877 default_sa.go:45] found service account: "default"
	I1202 12:54:32.853928   57877 default_sa.go:55] duration metric: took 2.318516ms for default service account to be created ...
	I1202 12:54:32.853935   57877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:32.858485   57877 system_pods.go:86] 8 kube-system pods found
	I1202 12:54:32.858508   57877 system_pods.go:89] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.858513   57877 system_pods.go:89] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.858519   57877 system_pods.go:89] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.858523   57877 system_pods.go:89] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.858526   57877 system_pods.go:89] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.858530   57877 system_pods.go:89] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.858536   57877 system_pods.go:89] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.858540   57877 system_pods.go:89] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.858547   57877 system_pods.go:126] duration metric: took 4.607096ms to wait for k8s-apps to be running ...
	I1202 12:54:32.858555   57877 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:32.858592   57877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:32.874267   57877 system_svc.go:56] duration metric: took 15.704013ms WaitForService to wait for kubelet
	I1202 12:54:32.874293   57877 kubeadm.go:582] duration metric: took 4m21.694870267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:32.874311   57877 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:32.877737   57877 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:32.877757   57877 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:32.877768   57877 node_conditions.go:105] duration metric: took 3.452076ms to run NodePressure ...
	I1202 12:54:32.877782   57877 start.go:241] waiting for startup goroutines ...
	I1202 12:54:32.877791   57877 start.go:246] waiting for cluster config update ...
	I1202 12:54:32.877807   57877 start.go:255] writing updated cluster config ...
	I1202 12:54:32.878129   57877 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:32.926190   57877 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:32.927894   57877 out.go:177] * Done! kubectl is now configured to use "no-preload-658679" cluster and "default" namespace by default
	I1202 12:54:29.556420   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:35.636450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:38.708454   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:44.788462   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:47.860484   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:53.940448   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:57.012536   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:03.092433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:06.164483   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:12.244464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:15.316647   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:21.396479   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:24.468584   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:32.968600   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:55:32.968731   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:55:32.970229   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:32.970291   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:32.970394   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:32.970513   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:32.970629   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:32.970717   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:32.972396   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:32.972491   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:32.972577   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:32.972734   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:32.972823   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:32.972926   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:32.973006   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:32.973108   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:32.973192   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:32.973318   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:32.973429   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:32.973501   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:32.973594   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:32.973658   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:32.973722   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:32.973819   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:32.973903   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:32.974041   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:32.974157   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:32.974206   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:32.974301   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:32.976508   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:32.976620   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:32.976741   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:32.976842   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:32.976957   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:32.977191   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:32.977281   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:55:32.977342   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977505   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977579   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977795   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977906   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978091   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978174   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978394   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978497   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978743   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978756   59162 kubeadm.go:310] 
	I1202 12:55:32.978801   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:55:32.978859   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:55:32.978868   59162 kubeadm.go:310] 
	I1202 12:55:32.978914   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:55:32.978961   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:55:32.979078   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:55:32.979088   59162 kubeadm.go:310] 
	I1202 12:55:32.979230   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:55:32.979279   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:55:32.979337   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:55:32.979346   59162 kubeadm.go:310] 
	I1202 12:55:32.979484   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:55:32.979580   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:55:32.979593   59162 kubeadm.go:310] 
	I1202 12:55:32.979721   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:55:32.979848   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:55:32.979968   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:55:32.980059   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:55:32.980127   59162 kubeadm.go:310] 
	W1202 12:55:32.980202   59162 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 12:55:32.980267   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:55:33.452325   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:55:33.467527   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:55:33.477494   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:55:33.477522   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:55:33.477575   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:55:33.487333   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:55:33.487395   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:55:33.497063   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:55:33.506552   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:55:33.506605   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:55:33.515968   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.524922   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:55:33.524956   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.534339   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:55:33.543370   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:55:33.543403   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:55:33.552970   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:55:33.624833   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:33.624990   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:33.767688   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:33.767796   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:33.767909   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:33.935314   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:30.548478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.624512   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.937193   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:33.937290   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:33.937402   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:33.937513   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:33.937620   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:33.937722   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:33.937791   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:33.937845   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:33.937896   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:33.937964   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:33.938028   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:33.938061   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:33.938108   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:34.167163   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:35.008947   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:35.304057   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:35.385824   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:35.409687   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:35.413131   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:35.413218   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:35.569508   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:35.571455   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:35.571596   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:35.578476   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:35.579686   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:35.580586   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:35.582869   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:39.700423   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:42.772498   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:48.852452   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:51.924490   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:58.004488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:01.076456   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:07.160425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:10.228467   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:15.585409   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:56:15.585530   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:15.585792   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:16.308453   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:20.586011   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:20.586257   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:19.380488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:25.460451   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:28.532425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:30.586783   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:30.587053   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:31.533399   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:31.533454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533725   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:31.533749   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533914   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:31.535344   61173 machine.go:96] duration metric: took 4m37.429393672s to provisionDockerMachine
	I1202 12:56:31.535386   61173 fix.go:56] duration metric: took 4m37.448634942s for fixHost
	I1202 12:56:31.535394   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 4m37.448659715s
	W1202 12:56:31.535408   61173 start.go:714] error starting host: provision: host is not running
	W1202 12:56:31.535498   61173 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1202 12:56:31.535507   61173 start.go:729] Will try again in 5 seconds ...
	I1202 12:56:36.536323   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:56:36.536434   61173 start.go:364] duration metric: took 71.395µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:56:36.536463   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:56:36.536471   61173 fix.go:54] fixHost starting: 
	I1202 12:56:36.536763   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:56:36.536790   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:56:36.551482   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1202 12:56:36.551962   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:56:36.552383   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:56:36.552405   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:56:36.552689   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:56:36.552849   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:36.552968   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:56:36.554481   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Stopped err=<nil>
	I1202 12:56:36.554501   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	W1202 12:56:36.554652   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:56:36.556508   61173 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653783" ...
	I1202 12:56:36.557534   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Start
	I1202 12:56:36.557690   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring networks are active...
	I1202 12:56:36.558371   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network default is active
	I1202 12:56:36.558713   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network mk-default-k8s-diff-port-653783 is active
	I1202 12:56:36.559023   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Getting domain xml...
	I1202 12:56:36.559739   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Creating domain...
	I1202 12:56:37.799440   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting to get IP...
	I1202 12:56:37.800397   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800918   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.800836   62278 retry.go:31] will retry after 192.811495ms: waiting for machine to come up
	I1202 12:56:37.995285   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995743   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995771   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.995697   62278 retry.go:31] will retry after 367.440749ms: waiting for machine to come up
	I1202 12:56:38.365229   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365781   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.365731   62278 retry.go:31] will retry after 350.196014ms: waiting for machine to come up
	I1202 12:56:38.717121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717650   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717681   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.717590   62278 retry.go:31] will retry after 557.454725ms: waiting for machine to come up
	I1202 12:56:39.276110   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276631   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:39.276536   62278 retry.go:31] will retry after 735.275509ms: waiting for machine to come up
	I1202 12:56:40.013307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013888   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.013833   62278 retry.go:31] will retry after 613.45623ms: waiting for machine to come up
	I1202 12:56:40.629220   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629731   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629776   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.629678   62278 retry.go:31] will retry after 748.849722ms: waiting for machine to come up
	I1202 12:56:41.380615   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381052   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381075   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:41.381023   62278 retry.go:31] will retry after 1.342160202s: waiting for machine to come up
	I1202 12:56:42.724822   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725315   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725355   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:42.725251   62278 retry.go:31] will retry after 1.693072543s: waiting for machine to come up
	I1202 12:56:44.420249   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420700   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420721   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:44.420658   62278 retry.go:31] will retry after 2.210991529s: waiting for machine to come up
	I1202 12:56:46.633486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633847   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:46.633807   62278 retry.go:31] will retry after 2.622646998s: waiting for machine to come up
	I1202 12:56:50.587516   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:50.587731   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:49.257705   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258232   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:49.258186   62278 retry.go:31] will retry after 2.375973874s: waiting for machine to come up
	I1202 12:56:51.636055   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636422   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636450   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:51.636379   62278 retry.go:31] will retry after 3.118442508s: waiting for machine to come up
	I1202 12:56:54.757260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757665   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Found IP for machine: 192.168.39.154
	I1202 12:56:54.757689   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has current primary IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757697   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserving static IP address...
	I1202 12:56:54.758088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.758108   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserved static IP address: 192.168.39.154
	I1202 12:56:54.758120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | skip adding static IP to network mk-default-k8s-diff-port-653783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"}
	I1202 12:56:54.758134   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Getting to WaitForSSH function...
	I1202 12:56:54.758142   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for SSH to be available...
	I1202 12:56:54.760333   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760643   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.760672   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760789   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH client type: external
	I1202 12:56:54.760812   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa (-rw-------)
	I1202 12:56:54.760855   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:56:54.760880   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | About to run SSH command:
	I1202 12:56:54.760892   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | exit 0
	I1202 12:56:54.884099   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | SSH cmd err, output: <nil>: 
	I1202 12:56:54.884435   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetConfigRaw
	I1202 12:56:54.885058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:54.887519   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.887823   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.887854   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.888041   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:56:54.888333   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:56:54.888352   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:54.888564   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:54.890754   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891062   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.891090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891254   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:54.891423   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891560   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891709   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:54.891851   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:54.892053   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:54.892070   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:56:54.996722   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:56:54.996751   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.996974   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:54.997004   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.997202   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.000026   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000425   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.000453   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000624   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.000810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.000978   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.001122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.001308   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.001540   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.001562   61173 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653783 && echo "default-k8s-diff-port-653783" | sudo tee /etc/hostname
	I1202 12:56:55.122933   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653783
	
	I1202 12:56:55.122965   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.125788   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126182   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.126219   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126406   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.126555   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126718   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126834   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.126973   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.127180   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.127206   61173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:56:55.242263   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:55.242291   61173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:56:55.242331   61173 buildroot.go:174] setting up certificates
	I1202 12:56:55.242340   61173 provision.go:84] configureAuth start
	I1202 12:56:55.242350   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:55.242604   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:55.245340   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245685   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.245719   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245882   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.248090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248481   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.248512   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248659   61173 provision.go:143] copyHostCerts
	I1202 12:56:55.248718   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:56:55.248733   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:56:55.248810   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:56:55.248920   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:56:55.248931   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:56:55.248965   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:56:55.249039   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:56:55.249049   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:56:55.249081   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:56:55.249152   61173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653783 san=[127.0.0.1 192.168.39.154 default-k8s-diff-port-653783 localhost minikube]
	I1202 12:56:55.688887   61173 provision.go:177] copyRemoteCerts
	I1202 12:56:55.688948   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:56:55.688976   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.691486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.691865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.691896   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.692056   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.692239   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.692403   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.692524   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:55.777670   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:56:55.802466   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 12:56:55.826639   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:56:55.850536   61173 provision.go:87] duration metric: took 608.183552ms to configureAuth
	I1202 12:56:55.850560   61173 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:56:55.850731   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:56:55.850813   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.853607   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.853991   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.854024   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.854122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.854294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854436   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854598   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.854734   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.854883   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.854899   61173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:56:56.083902   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:56:56.083931   61173 machine.go:96] duration metric: took 1.195584241s to provisionDockerMachine
	I1202 12:56:56.083944   61173 start.go:293] postStartSetup for "default-k8s-diff-port-653783" (driver="kvm2")
	I1202 12:56:56.083957   61173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:56:56.083974   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.084276   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:56:56.084307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.087400   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087727   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.087750   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.088088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.088272   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.088448   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.170612   61173 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:56:56.175344   61173 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:56:56.175366   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:56:56.175454   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:56:56.175529   61173 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:56:56.175610   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:56:56.185033   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:56:56.209569   61173 start.go:296] duration metric: took 125.611321ms for postStartSetup
	I1202 12:56:56.209605   61173 fix.go:56] duration metric: took 19.673134089s for fixHost
	I1202 12:56:56.209623   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.212600   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.212883   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.212923   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.213137   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.213395   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213575   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213708   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.213854   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:56.214014   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:56.214032   61173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:56:56.320723   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733144216.287359296
	
	I1202 12:56:56.320744   61173 fix.go:216] guest clock: 1733144216.287359296
	I1202 12:56:56.320753   61173 fix.go:229] Guest: 2024-12-02 12:56:56.287359296 +0000 UTC Remote: 2024-12-02 12:56:56.209609687 +0000 UTC m=+302.261021771 (delta=77.749609ms)
	I1202 12:56:56.320776   61173 fix.go:200] guest clock delta is within tolerance: 77.749609ms
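
The guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock and accepts the 77.7ms delta as within tolerance. A minimal sketch of that comparison is shown below; the 2-second tolerance is an assumption for illustration and need not match minikube's actual threshold.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Guest output of `date +%s.%N`, taken from the log above.
    	guestRaw := "1733144216.287359296"
    	parts := strings.SplitN(guestRaw, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	// In the real check this would be the host-side timestamp of the SSH call.
    	host := time.Now()
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed tolerance for illustration
    	if delta > tolerance {
    		fmt.Printf("guest clock drift %v exceeds %v; a resync would be needed\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	}
    }
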
	I1202 12:56:56.320781   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 19.784333398s
	I1202 12:56:56.320797   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.321011   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:56.323778   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324117   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.324136   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324289   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324759   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324921   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324984   61173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:56:56.325034   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.325138   61173 ssh_runner.go:195] Run: cat /version.json
	I1202 12:56:56.325164   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.327744   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328000   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328083   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328262   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328373   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.328774   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328769   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.328908   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.329007   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.405370   61173 ssh_runner.go:195] Run: systemctl --version
	I1202 12:56:56.427743   61173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:56:56.574416   61173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:56:56.580858   61173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:56:56.580948   61173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:56:56.597406   61173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:56:56.597427   61173 start.go:495] detecting cgroup driver to use...
	I1202 12:56:56.597472   61173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:56:56.612456   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:56:56.625811   61173 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:56:56.625847   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:56:56.642677   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:56:56.657471   61173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:56:56.776273   61173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:56:56.949746   61173 docker.go:233] disabling docker service ...
	I1202 12:56:56.949807   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:56:56.964275   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:56:56.977461   61173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:56:57.091134   61173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:56:57.209421   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:56:57.223153   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:56:57.241869   61173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:56:57.241933   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.252117   61173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:56:57.252174   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.262799   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.275039   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.285987   61173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:56:57.296968   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.307242   61173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.324555   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
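
The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to use the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager. The sketch below does the equivalent rewrite with Go regexps on an in-memory copy of a config fragment; it is an illustration of the edit, not the command minikube actually runs, and it never touches the real file.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// A small in-memory stand-in for /etc/crio/crio.conf.d/02-crio.conf.
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

    	// Same replacements as the sed one-liners in the log above.
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf)
    }
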
	I1202 12:56:57.335395   61173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:56:57.344411   61173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:56:57.344450   61173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:56:57.357400   61173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:56:57.366269   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:56:57.486764   61173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:56:57.574406   61173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:56:57.574464   61173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:56:57.579268   61173 start.go:563] Will wait 60s for crictl version
	I1202 12:56:57.579328   61173 ssh_runner.go:195] Run: which crictl
	I1202 12:56:57.583110   61173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:56:57.621921   61173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:56:57.622003   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.650543   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.683842   61173 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:56:57.684861   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:57.687188   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687459   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:57.687505   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687636   61173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:56:57.691723   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
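
The bash one-liner above removes any stale host.minikube.internal line from /etc/hosts and appends the current gateway IP. A minimal sketch of that upsert in Go is shown below; it operates on an in-memory copy rather than the real /etc/hosts, and the sample input lines are invented for illustration.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any line mentioning name and appends "ip<TAB>name".
    func upsertHost(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.Contains(line, name) {
    			continue // drop the stale entry
    		}
    		out = append(out, line)
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.122.1\thost.minikube.internal\n"
    	fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
    }
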
	I1202 12:56:57.704869   61173 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:56:57.704999   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:56:57.705054   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:56:57.738780   61173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 12:56:57.738828   61173 ssh_runner.go:195] Run: which lz4
	I1202 12:56:57.743509   61173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:56:57.747763   61173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:56:57.747784   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 12:56:59.105988   61173 crio.go:462] duration metric: took 1.362506994s to copy over tarball
	I1202 12:56:59.106062   61173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:57:01.191007   61173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084920502s)
	I1202 12:57:01.191031   61173 crio.go:469] duration metric: took 2.085014298s to extract the tarball
	I1202 12:57:01.191038   61173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 12:57:01.229238   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:57:01.272133   61173 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:57:01.272156   61173 cache_images.go:84] Images are preloaded, skipping loading
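
The preload check above decides whether images need loading by listing what CRI-O already has via `crictl images --output json` and looking for the expected kube-apiserver tag. Below is a rough sketch of that check; it assumes crictl is on PATH on the machine where it runs (in the log it runs on the guest over SSH), and the JSON shape (an "images" list with "repoTags") is assumed from crictl's output format.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var imgs imageList
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			if strings.HasPrefix(tag, "registry.k8s.io/kube-apiserver:") {
    				fmt.Println("preloaded:", tag)
    				return
    			}
    		}
    	}
    	fmt.Println("kube-apiserver image not preloaded")
    }
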
	I1202 12:57:01.272164   61173 kubeadm.go:934] updating node { 192.168.39.154 8444 v1.31.2 crio true true} ...
	I1202 12:57:01.272272   61173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:57:01.272330   61173 ssh_runner.go:195] Run: crio config
	I1202 12:57:01.318930   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:01.318957   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:01.318968   61173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:57:01.318994   61173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653783 NodeName:default-k8s-diff-port-653783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:57:01.319125   61173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:57:01.319184   61173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:57:01.330162   61173 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:57:01.330226   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:57:01.340217   61173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 12:57:01.356786   61173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:57:01.373210   61173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
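
The kubeadm.yaml written above embeds a KubeletConfiguration whose cgroupDriver must agree with the cgroup_manager value written into the CRI-O config earlier; a mismatch there is a common cause of kubelet start failures. The following is a minimal sketch, assuming gopkg.in/yaml.v3 is available, that parses a trimmed copy of that section and checks the driver; it is not part of minikube's own validation.

    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    type kubeletConfig struct {
    	Kind                     string `yaml:"kind"`
    	CgroupDriver             string `yaml:"cgroupDriver"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    	FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
    	// Trimmed copy of the KubeletConfiguration block from the generated kubeadm.yaml.
    	doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\n" +
    		"kind: KubeletConfiguration\n" +
    		"cgroupDriver: cgroupfs\n" +
    		"containerRuntimeEndpoint: unix:///var/run/crio/crio.sock\n" +
    		"failSwapOn: false\n")

    	var cfg kubeletConfig
    	if err := yaml.Unmarshal(doc, &cfg); err != nil {
    		log.Fatal(err)
    	}
    	if cfg.CgroupDriver != "cgroupfs" {
    		log.Fatalf("cgroupDriver %q does not match CRI-O's cgroup_manager", cfg.CgroupDriver)
    	}
    	fmt.Printf("%s: driver=%s endpoint=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
    }
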
	I1202 12:57:01.390184   61173 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1202 12:57:01.394099   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:57:01.406339   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:57:01.526518   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:57:01.543879   61173 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783 for IP: 192.168.39.154
	I1202 12:57:01.543899   61173 certs.go:194] generating shared ca certs ...
	I1202 12:57:01.543920   61173 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:57:01.544070   61173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:57:01.544134   61173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:57:01.544147   61173 certs.go:256] generating profile certs ...
	I1202 12:57:01.544285   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.key
	I1202 12:57:01.544377   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key.44fa7240
	I1202 12:57:01.544429   61173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key
	I1202 12:57:01.544579   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:57:01.544608   61173 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:57:01.544617   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:57:01.544636   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:57:01.544659   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:57:01.544688   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:57:01.544727   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:57:01.545381   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:57:01.580933   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:57:01.621199   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:57:01.648996   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:57:01.681428   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 12:57:01.710907   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:57:01.741414   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:57:01.766158   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:57:01.789460   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:57:01.812569   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:57:01.836007   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:57:01.858137   61173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:57:01.874315   61173 ssh_runner.go:195] Run: openssl version
	I1202 12:57:01.880190   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:57:01.893051   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898250   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898306   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.904207   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:57:01.915975   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:57:01.927977   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932436   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932478   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.938049   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:57:01.948744   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:57:01.959472   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963806   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963839   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.969412   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:57:01.980743   61173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:57:01.986211   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:57:01.992717   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:57:01.998781   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:57:02.004934   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:57:02.010903   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:57:02.016677   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
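
Each of the openssl invocations above uses `-checkend 86400` to confirm the certificate will still be valid 24 hours from now. The sketch below is a standalone Go equivalent of one such check; the file name is a hypothetical local copy and the code is an illustration, not the command the test actually runs.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical local copy of one of the control-plane certificates.
    	data, err := os.ReadFile("apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Same condition as `openssl x509 -checkend 86400`: does it expire within 24h?
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    	} else {
    		fmt.Println("certificate valid until:", cert.NotAfter)
    	}
    }
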
	I1202 12:57:02.022595   61173 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:57:02.022680   61173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:57:02.022711   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.060425   61173 cri.go:89] found id: ""
	I1202 12:57:02.060497   61173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:57:02.070807   61173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:57:02.070827   61173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:57:02.070868   61173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:57:02.081036   61173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:57:02.082088   61173 kubeconfig.go:125] found "default-k8s-diff-port-653783" server: "https://192.168.39.154:8444"
	I1202 12:57:02.084179   61173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:57:02.094381   61173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
	I1202 12:57:02.094429   61173 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:57:02.094441   61173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:57:02.094485   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.129098   61173 cri.go:89] found id: ""
	I1202 12:57:02.129152   61173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:57:02.146731   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:57:02.156860   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:57:02.156881   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 12:57:02.156924   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 12:57:02.166273   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:57:02.166322   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:57:02.175793   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 12:57:02.184665   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:57:02.184707   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:57:02.194243   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.203173   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:57:02.203217   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.212563   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 12:57:02.221640   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:57:02.221682   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:57:02.230764   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:57:02.241691   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:02.353099   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.283720   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.487082   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.564623   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.644136   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:57:03.644219   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.144882   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.644873   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.144778   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.645022   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.662892   61173 api_server.go:72] duration metric: took 2.01875734s to wait for apiserver process to appear ...
	I1202 12:57:05.662920   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:57:05.662943   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.328451   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.328479   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.328492   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.368504   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.368547   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.664065   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.681253   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:08.681319   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.163310   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.169674   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:09.169699   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.663220   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.667397   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 12:57:09.675558   61173 api_server.go:141] control plane version: v1.31.2
	I1202 12:57:09.675582   61173 api_server.go:131] duration metric: took 4.012653559s to wait for apiserver health ...
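The [+]/[-] component listing above is the apiserver's verbose healthz output. A minimal sketch of fetching the same view by hand, assuming the endpoint on 192.168.39.154:8444 is still reachable and that anonymous access to /healthz is permitted (the usual default):

    # -k: minikube's apiserver certificate is not in the local trust store
    curl -k "https://192.168.39.154:8444/healthz?verbose"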
	I1202 12:57:09.675592   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:09.675601   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:09.677275   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:57:09.678527   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:57:09.690640   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
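The scp above drops a 496-byte bridge conflist at /etc/cni/net.d/1-k8s.conflist on the node. A minimal sketch of inspecting that file over the profile's SSH session, assuming the profile is named after the node seen below (default-k8s-diff-port-653783):

    # show the CNI config minikube just wrote on the guest
    minikube -p default-k8s-diff-port-653783 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist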
	I1202 12:57:09.708185   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:57:09.724719   61173 system_pods.go:59] 8 kube-system pods found
	I1202 12:57:09.724747   61173 system_pods.go:61] "coredns-7c65d6cfc9-7g74d" [a35c0ad2-6c02-4e14-afe5-887b3b5fd70f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 12:57:09.724755   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [25bc45db-481f-4c88-853b-105a32e1e8e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 12:57:09.724763   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [af0f2123-8eac-4f90-bc06-1fc1cb10deda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 12:57:09.724769   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [c18b1705-438b-4954-941e-cfe5a3a0f6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 12:57:09.724777   61173 system_pods.go:61] "kube-proxy-5t9gh" [35d08e89-5ad8-4fcb-9bff-5c12bc1fb497] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 12:57:09.724782   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [0db501e4-36fb-4a67-b11d-d6d9f3fa1383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 12:57:09.724789   61173 system_pods.go:61] "metrics-server-6867b74b74-9v79b" [418c7615-5d41-4a24-b497-674f55573a0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:57:09.724794   61173 system_pods.go:61] "storage-provisioner" [dab6b0c7-8e10-435f-a57c-76044eaa11c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 12:57:09.724799   61173 system_pods.go:74] duration metric: took 16.592713ms to wait for pod list to return data ...
	I1202 12:57:09.724808   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:57:09.731235   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:57:09.731260   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 12:57:09.731274   61173 node_conditions.go:105] duration metric: took 6.4605ms to run NodePressure ...
	I1202 12:57:09.731293   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:10.021346   61173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025152   61173 kubeadm.go:739] kubelet initialised
	I1202 12:57:10.025171   61173 kubeadm.go:740] duration metric: took 3.798597ms waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025178   61173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:57:10.029834   61173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.033699   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033718   61173 pod_ready.go:82] duration metric: took 3.86169ms for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.033726   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033731   61173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.037291   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037308   61173 pod_ready.go:82] duration metric: took 3.569468ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.037317   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037322   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.041016   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041035   61173 pod_ready.go:82] duration metric: took 3.705222ms for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.041046   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041071   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:12.047581   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:14.048663   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:16.547831   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:19.047816   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.047839   61173 pod_ready.go:82] duration metric: took 9.006753973s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.047850   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052277   61173 pod_ready.go:93] pod "kube-proxy-5t9gh" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.052296   61173 pod_ready.go:82] duration metric: took 4.440131ms for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052305   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:21.058989   61173 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:22.558501   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:22.558524   61173 pod_ready.go:82] duration metric: took 3.506212984s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:22.558533   61173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:24.564668   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:27.064209   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
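The pod_ready polling above is driven by the pod's Ready condition. A minimal sketch of checking that condition directly with kubectl, assuming the kubeconfig context carries the profile name (minikube's usual convention):

    # prints True or False for the Ready condition of the metrics-server pod
    kubectl --context default-k8s-diff-port-653783 -n kube-system \
      get pod metrics-server-6867b74b74-9v79b \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'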
	I1202 12:57:30.586451   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:57:30.586705   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586735   59162 kubeadm.go:310] 
	I1202 12:57:30.586786   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:57:30.586842   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:57:30.586859   59162 kubeadm.go:310] 
	I1202 12:57:30.586924   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:57:30.586990   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:57:30.587140   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:57:30.587152   59162 kubeadm.go:310] 
	I1202 12:57:30.587292   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:57:30.587347   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:57:30.587387   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:57:30.587405   59162 kubeadm.go:310] 
	I1202 12:57:30.587557   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:57:30.587642   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:57:30.587655   59162 kubeadm.go:310] 
	I1202 12:57:30.587751   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:57:30.587841   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:57:30.587923   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:57:30.588029   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:57:30.588043   59162 kubeadm.go:310] 
	I1202 12:57:30.588959   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:57:30.589087   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:57:30.589211   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:57:30.589277   59162 kubeadm.go:394] duration metric: took 7m57.557592718s to StartCluster
	I1202 12:57:30.589312   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:57:30.589358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:57:30.634368   59162 cri.go:89] found id: ""
	I1202 12:57:30.634402   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.634414   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:57:30.634423   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:57:30.634489   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:57:30.669582   59162 cri.go:89] found id: ""
	I1202 12:57:30.669605   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.669617   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:57:30.669625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:57:30.669679   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:57:30.707779   59162 cri.go:89] found id: ""
	I1202 12:57:30.707805   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.707815   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:57:30.707823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:57:30.707878   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:57:30.745724   59162 cri.go:89] found id: ""
	I1202 12:57:30.745751   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.745761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:57:30.745768   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:57:30.745816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:57:30.782946   59162 cri.go:89] found id: ""
	I1202 12:57:30.782969   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.782980   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:57:30.782987   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:57:30.783040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:57:30.821743   59162 cri.go:89] found id: ""
	I1202 12:57:30.821776   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.821787   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:57:30.821795   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:57:30.821843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:57:30.859754   59162 cri.go:89] found id: ""
	I1202 12:57:30.859783   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.859793   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:57:30.859801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:57:30.859876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:57:30.893632   59162 cri.go:89] found id: ""
	I1202 12:57:30.893660   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.893668   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:57:30.893677   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:57:30.893690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:57:30.946387   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:57:30.946413   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:57:30.960540   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:57:30.960565   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:57:31.038246   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:57:31.038267   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:57:31.038279   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:57:31.155549   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:57:31.155584   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
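The gathering steps above (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) are roughly what minikube bundles into its log export. A minimal sketch of capturing the same bundle to a file, with <profile> standing in for the profile name, which is not shown in this excerpt:

    # <profile> is a placeholder for the cluster profile under test
    minikube -p <profile> logs --file=logs.txt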
	W1202 12:57:31.221709   59162 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:57:31.221773   59162 out.go:270] * 
	W1202 12:57:31.221846   59162 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.221868   59162 out.go:270] * 
	W1202 12:57:31.222987   59162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:57:31.226661   59162 out.go:201] 
	W1202 12:57:31.227691   59162 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.227739   59162 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:57:31.227763   59162 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:57:31.229696   59162 out.go:201] 
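The suggestion printed above points at the kubelet cgroup driver. A minimal sketch of retrying the failed v1.20.0 start with that extra config, where <profile> is a placeholder for the old-k8s-version profile name (not shown in this excerpt) and the driver/runtime match those used throughout this report (kvm2 + crio):

    # <profile> is a placeholder; flags mirror the failed start plus the suggested kubelet override
    minikube start -p <profile> \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd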
	I1202 12:57:29.064892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:31.065451   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:33.564442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:36.064844   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:38.065020   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:40.565467   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:43.065021   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:45.065674   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:47.565692   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:50.064566   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:52.065673   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:54.563919   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:56.565832   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:59.064489   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:01.064627   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:03.066470   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:05.565311   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:07.565342   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:10.065050   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:12.565026   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:15.065113   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:17.065377   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:19.570428   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:22.065941   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:24.564883   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:27.064907   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:29.565025   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:31.565662   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:33.566049   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:36.064675   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:38.064820   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:40.065555   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:42.565304   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:44.566076   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:47.064538   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:49.064571   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:51.064914   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:53.065942   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:55.564490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:57.566484   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:00.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:02.065385   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:04.065541   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:06.065687   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:08.564349   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:11.064985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:13.065285   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:15.565546   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:17.569757   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:20.065490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:22.565206   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:25.065588   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:27.065818   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:29.066671   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:31.565998   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:34.064527   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:36.064698   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:38.065158   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:40.563432   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:42.571603   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:45.065725   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:47.565321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:50.065712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:52.564522   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:55.065989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:57.563712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:59.565908   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:02.065655   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:04.564520   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:07.065360   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:09.566223   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:12.065149   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:14.564989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:17.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:19.066069   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:21.066247   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:23.564474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:26.065294   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:28.563804   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:30.565317   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:32.565978   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:35.064896   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:37.065442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:39.065516   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:41.565297   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:44.064849   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:46.564956   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:49.065151   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:51.065892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:53.570359   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:56.064144   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:58.065042   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:00.065116   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:02.065474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:04.564036   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:06.564531   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:08.565018   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:10.565163   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:13.065421   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:15.065623   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:17.564985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:20.065093   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.065732   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.559325   61173 pod_ready.go:82] duration metric: took 4m0.000776679s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	E1202 13:01:22.559360   61173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 13:01:22.559393   61173 pod_ready.go:39] duration metric: took 4m12.534205059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:01:22.559419   61173 kubeadm.go:597] duration metric: took 4m20.488585813s to restartPrimaryControlPlane
	W1202 13:01:22.559474   61173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 13:01:22.559501   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 13:01:48.872503   61173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312974314s)
	I1202 13:01:48.872571   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:01:48.893337   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:01:48.921145   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:01:48.934577   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:01:48.934594   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 13:01:48.934639   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 13:01:48.956103   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:01:48.956162   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:01:48.967585   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 13:01:48.984040   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:01:48.984084   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:01:48.994049   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.003811   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:01:49.003859   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.013646   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 13:01:49.023003   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:01:49.023051   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
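The grep/rm sequence above is the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not mention https://control-plane.minikube.internal:8444 is removed so the upcoming kubeadm init can rewrite it. A minimal shell sketch of the same check (illustrative only, not minikube's own code):

    # remove each kubeconfig that does not point at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done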
	I1202 13:01:49.032678   61173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:01:49.196294   61173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 13:01:57.349437   61173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:01:57.349497   61173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:01:57.349571   61173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:01:57.349740   61173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:01:57.349882   61173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:01:57.349976   61173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:01:57.351474   61173 out.go:235]   - Generating certificates and keys ...
	I1202 13:01:57.351576   61173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:01:57.351634   61173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:01:57.351736   61173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 13:01:57.351842   61173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 13:01:57.351952   61173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 13:01:57.352035   61173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 13:01:57.352132   61173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 13:01:57.352202   61173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 13:01:57.352325   61173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 13:01:57.352439   61173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 13:01:57.352515   61173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 13:01:57.352608   61173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:01:57.352689   61173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:01:57.352775   61173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:01:57.352860   61173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:01:57.352962   61173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:01:57.353058   61173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:01:57.353172   61173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:01:57.353295   61173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 13:01:57.354669   61173 out.go:235]   - Booting up control plane ...
	I1202 13:01:57.354756   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 13:01:57.354829   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 13:01:57.354884   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 13:01:57.354984   61173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 13:01:57.355073   61173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 13:01:57.355127   61173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 13:01:57.355280   61173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 13:01:57.355435   61173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 13:01:57.355528   61173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.24354ms
	I1202 13:01:57.355641   61173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 13:01:57.355720   61173 kubeadm.go:310] [api-check] The API server is healthy after 5.002367533s
	I1202 13:01:57.355832   61173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 13:01:57.355945   61173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 13:01:57.356000   61173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 13:01:57.356175   61173 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 13:01:57.356246   61173 kubeadm.go:310] [bootstrap-token] Using token: 0oxhck.9gzdpio1kzs08rgi
	I1202 13:01:57.357582   61173 out.go:235]   - Configuring RBAC rules ...
	I1202 13:01:57.357692   61173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 13:01:57.357798   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 13:01:57.357973   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 13:01:57.358102   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 13:01:57.358246   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 13:01:57.358361   61173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 13:01:57.358460   61173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 13:01:57.358497   61173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 13:01:57.358547   61173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 13:01:57.358557   61173 kubeadm.go:310] 
	I1202 13:01:57.358615   61173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 13:01:57.358625   61173 kubeadm.go:310] 
	I1202 13:01:57.358691   61173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 13:01:57.358698   61173 kubeadm.go:310] 
	I1202 13:01:57.358730   61173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 13:01:57.358800   61173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 13:01:57.358878   61173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 13:01:57.358889   61173 kubeadm.go:310] 
	I1202 13:01:57.358954   61173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 13:01:57.358961   61173 kubeadm.go:310] 
	I1202 13:01:57.358999   61173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 13:01:57.359005   61173 kubeadm.go:310] 
	I1202 13:01:57.359047   61173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 13:01:57.359114   61173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 13:01:57.359179   61173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 13:01:57.359185   61173 kubeadm.go:310] 
	I1202 13:01:57.359271   61173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 13:01:57.359364   61173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 13:01:57.359377   61173 kubeadm.go:310] 
	I1202 13:01:57.359451   61173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359561   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 13:01:57.359581   61173 kubeadm.go:310] 	--control-plane 
	I1202 13:01:57.359587   61173 kubeadm.go:310] 
	I1202 13:01:57.359666   61173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 13:01:57.359678   61173 kubeadm.go:310] 
	I1202 13:01:57.359745   61173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359848   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 13:01:57.359874   61173 cni.go:84] Creating CNI manager for ""
	I1202 13:01:57.359887   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:01:57.361282   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 13:01:57.362319   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 13:01:57.373455   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 13:01:57.393003   61173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 13:01:57.393055   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:57.393136   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653783 minikube.k8s.io/updated_at=2024_12_02T13_01_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=default-k8s-diff-port-653783 minikube.k8s.io/primary=true
	I1202 13:01:57.426483   61173 ops.go:34] apiserver oom_adj: -16
	I1202 13:01:57.584458   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.084831   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.585450   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.084976   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.585068   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.085470   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.584722   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.084770   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.585414   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.725480   61173 kubeadm.go:1113] duration metric: took 4.332474868s to wait for elevateKubeSystemPrivileges
	I1202 13:02:01.725523   61173 kubeadm.go:394] duration metric: took 4m59.70293206s to StartCluster
	I1202 13:02:01.725545   61173 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.725633   61173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:02:01.730008   61173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.730438   61173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:02:01.730586   61173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 13:02:01.730685   61173 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730703   61173 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730707   61173 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730719   61173 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730734   61173 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730736   61173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653783"
	W1202 13:02:01.730746   61173 addons.go:243] addon metrics-server should already be in state true
	I1202 13:02:01.730776   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	W1202 13:02:01.730711   61173 addons.go:243] addon storage-provisioner should already be in state true
	I1202 13:02:01.730865   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.731186   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731204   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731215   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731220   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731235   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731255   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.730707   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:02:01.731895   61173 out.go:177] * Verifying Kubernetes components...
	I1202 13:02:01.733515   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:02:01.748534   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1202 13:02:01.749156   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.749717   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.749743   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.750167   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.750734   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.750771   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.750997   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I1202 13:02:01.751714   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I1202 13:02:01.751911   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752088   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752388   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.752406   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.752785   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753212   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.753240   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.753514   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.753527   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.753807   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753953   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.756554   61173 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653783"
	W1202 13:02:01.756567   61173 addons.go:243] addon default-storageclass should already be in state true
	I1202 13:02:01.756588   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.756803   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.756824   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.769388   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I1202 13:02:01.769867   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.770303   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.770328   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.770810   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.770984   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.771974   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1202 13:02:01.772430   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.773043   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.773068   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.773294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.773441   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.773707   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.775187   61173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 13:02:01.775514   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.776461   61173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:01.776482   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 13:02:01.776499   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.776562   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I1202 13:02:01.776927   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.777077   61173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 13:02:01.777497   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.777509   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.777795   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.778197   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 13:02:01.778215   61173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 13:02:01.778235   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.778284   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.778315   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.779324   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780389   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.780472   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780336   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.780832   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.780996   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.781101   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.781390   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781588   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.781608   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781737   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.781886   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.781973   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.782063   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.793947   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I1202 13:02:01.794298   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.794720   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.794737   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.795031   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.795200   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.796909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.797092   61173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:01.797104   61173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 13:02:01.797121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.799831   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800160   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.800191   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800416   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.800595   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.800702   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.800823   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.936668   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:02:01.954328   61173 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968409   61173 node_ready.go:49] node "default-k8s-diff-port-653783" has status "Ready":"True"
	I1202 13:02:01.968427   61173 node_ready.go:38] duration metric: took 14.066432ms for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968436   61173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:01.981818   61173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:02.071558   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 13:02:02.071590   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 13:02:02.076260   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:02.085318   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:02.098342   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 13:02:02.098363   61173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 13:02:02.156135   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.156165   61173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 13:02:02.175618   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.359810   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.359841   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360111   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360201   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.360179   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360225   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.360246   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360518   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360528   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360532   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.366246   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.366270   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.366633   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.366647   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.366660   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.134955   61173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049592704s)
	I1202 13:02:03.135040   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135059   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135084   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135114   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135342   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135392   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135413   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135432   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135533   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135565   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135584   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.136554   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136558   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136569   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136568   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:03.136572   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136579   61173 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:03.138071   61173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1202 13:02:03.139462   61173 addons.go:510] duration metric: took 1.408893663s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1202 13:02:03.986445   61173 pod_ready.go:93] pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:03.986471   61173 pod_ready.go:82] duration metric: took 2.0046319s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:03.986482   61173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.492973   61173 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:04.492995   61173 pod_ready.go:82] duration metric: took 506.506566ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.493004   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:06.500118   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.502468   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.999764   61173 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:08.999785   61173 pod_ready.go:82] duration metric: took 4.506775084s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:08.999795   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005354   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.005376   61173 pod_ready.go:82] duration metric: took 1.005574607s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005385   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010948   61173 pod_ready.go:93] pod "kube-proxy-d4vw4" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.010964   61173 pod_ready.go:82] duration metric: took 5.574069ms for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010972   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014901   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.014918   61173 pod_ready.go:82] duration metric: took 3.938654ms for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014927   61173 pod_ready.go:39] duration metric: took 8.046482137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:10.014943   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:02:10.014994   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:02:10.032401   61173 api_server.go:72] duration metric: took 8.301924942s to wait for apiserver process to appear ...
	I1202 13:02:10.032418   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:02:10.032436   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 13:02:10.036406   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 13:02:10.037035   61173 api_server.go:141] control plane version: v1.31.2
	I1202 13:02:10.037052   61173 api_server.go:131] duration metric: took 4.627223ms to wait for apiserver health ...
	I1202 13:02:10.037061   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:02:10.042707   61173 system_pods.go:59] 9 kube-system pods found
	I1202 13:02:10.042731   61173 system_pods.go:61] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.042740   61173 system_pods.go:61] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.042746   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.042752   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.042758   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.042762   61173 system_pods.go:61] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.042768   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.042776   61173 system_pods.go:61] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.042782   61173 system_pods.go:61] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.042794   61173 system_pods.go:74] duration metric: took 5.724009ms to wait for pod list to return data ...
	I1202 13:02:10.042800   61173 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:02:10.045407   61173 default_sa.go:45] found service account: "default"
	I1202 13:02:10.045422   61173 default_sa.go:55] duration metric: took 2.615305ms for default service account to be created ...
	I1202 13:02:10.045428   61173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:02:10.050473   61173 system_pods.go:86] 9 kube-system pods found
	I1202 13:02:10.050494   61173 system_pods.go:89] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.050499   61173 system_pods.go:89] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.050505   61173 system_pods.go:89] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.050510   61173 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.050514   61173 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.050518   61173 system_pods.go:89] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.050526   61173 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.050532   61173 system_pods.go:89] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.050540   61173 system_pods.go:89] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.050547   61173 system_pods.go:126] duration metric: took 5.115018ms to wait for k8s-apps to be running ...
	I1202 13:02:10.050552   61173 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:02:10.050588   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:02:10.065454   61173 system_svc.go:56] duration metric: took 14.89671ms WaitForService to wait for kubelet
	I1202 13:02:10.065475   61173 kubeadm.go:582] duration metric: took 8.335001135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:02:10.065490   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:02:10.199102   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:02:10.199123   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 13:02:10.199136   61173 node_conditions.go:105] duration metric: took 133.639645ms to run NodePressure ...
	I1202 13:02:10.199148   61173 start.go:241] waiting for startup goroutines ...
	I1202 13:02:10.199156   61173 start.go:246] waiting for cluster config update ...
	I1202 13:02:10.199167   61173 start.go:255] writing updated cluster config ...
	I1202 13:02:10.199421   61173 ssh_runner.go:195] Run: rm -f paused
	I1202 13:02:10.246194   61173 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:02:10.248146   61173 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653783" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.183377286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e0c69bb-90bb-4268-b43b-dd7cfcdc011e name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.183587793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e0c69bb-90bb-4268-b43b-dd7cfcdc011e name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.220494367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1aff99f-3b59-4424-a54b-b19db3fc26b6 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.220571665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1aff99f-3b59-4424-a54b-b19db3fc26b6 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.222335739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccf581a5-3331-497b-8443-c18327cafbc9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.222643518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144614222622786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccf581a5-3331-497b-8443-c18327cafbc9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.223167703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acdc1076-9306-4693-a352-2a29770b71e7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.223217574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acdc1076-9306-4693-a352-2a29770b71e7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.224271397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=acdc1076-9306-4693-a352-2a29770b71e7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.263288392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb56d32b-6647-49d7-9ce4-2706fa6783fa name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.263374592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb56d32b-6647-49d7-9ce4-2706fa6783fa name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.265670315Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2c58ab9-0980-4215-8924-1a51e4cdd727 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.266159496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144614266138658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2c58ab9-0980-4215-8924-1a51e4cdd727 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.266918917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5318fde6-2923-475b-ae07-9e3d4e5fa0be name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.266970714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5318fde6-2923-475b-ae07-9e3d4e5fa0be name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.267192476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5318fde6-2923-475b-ae07-9e3d4e5fa0be name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.283438203Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=d78fc076-af4e-4a77-a4ed-02dc17552e13 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.283521847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d78fc076-af4e-4a77-a4ed-02dc17552e13 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.301560552Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=876d0686-8fa3-41d5-8bbc-dc1b4b0c3f05 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.301631450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=876d0686-8fa3-41d5-8bbc-dc1b4b0c3f05 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.302484244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15d44608-afae-427d-ba28-7f8efcfe14cb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.302928131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144614302906519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15d44608-afae-427d-ba28-7f8efcfe14cb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.303318408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef340b2f-0a7b-4aa9-95a9-25cf7e2fd86f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.303386295Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef340b2f-0a7b-4aa9-95a9-25cf7e2fd86f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:03:34 no-preload-658679 crio[712]: time="2024-12-02 13:03:34.303567699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef340b2f-0a7b-4aa9-95a9-25cf7e2fd86f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b120e768c4ec7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   a84cbf3acc7fe       storage-provisioner
	7fe13c277f44c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ea70db85389df       busybox
	7db2e67ce7bdd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   4f7cd59c9e868       coredns-7c65d6cfc9-cvfc9
	ff4595631eef7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   a84cbf3acc7fe       storage-provisioner
	15d09a46ff041       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   e275084c32adb       kube-proxy-2xf6j
	460259371c977       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   82bddb51e45f2       etcd-no-preload-658679
	0c490584031d2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   d33a23bb21be2       kube-scheduler-no-preload-658679
	d8d62b779a876       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   1e4aaaa1c5f78       kube-apiserver-no-preload-658679
	316b371ddf0b0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   420a4aaa23c69       kube-controller-manager-no-preload-658679
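The listing above is the report's snapshot of CRI-O state on no-preload-658679: every control-plane container is on restart attempt 1, and storage-provisioner exited once (attempt 1) before coming back as attempt 2. A roughly equivalent snapshot can be taken on the node itself; the commands below are illustrative and assume the minikube profile name matches the node name.

  minikube -p no-preload-658679 ssh -- sudo crictl ps -a
  minikube -p no-preload-658679 ssh -- sudo crictl logs ff4595631eef7   # the exited storage-provisioner attempt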
	
	
	==> coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44557 - 28529 "HINFO IN 3661014269720602643.8108251855968392496. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008212656s
	
	
	==> describe nodes <==
	Name:               no-preload-658679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-658679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=no-preload-658679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T12_40_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 12:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-658679
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 13:03:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 13:00:51 +0000   Mon, 02 Dec 2024 12:40:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 13:00:51 +0000   Mon, 02 Dec 2024 12:40:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 13:00:51 +0000   Mon, 02 Dec 2024 12:40:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 13:00:51 +0000   Mon, 02 Dec 2024 12:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.205
	  Hostname:    no-preload-658679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2135076092b403ab0b57f9cee8abe8c
	  System UUID:                b2135076-092b-403a-b0b5-7f9cee8abe8c
	  Boot ID:                    059e703a-4f31-4023-a8da-070b32d9c155
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-cvfc9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-658679                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-658679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-658679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-2xf6j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-658679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-sn7tq              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-658679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-658679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-658679 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-658679 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-658679 event: Registered Node no-preload-658679 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-658679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-658679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-658679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-658679 event: Registered Node no-preload-658679 in Controller
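The node description reports no memory, disk, or PID pressure and a Ready condition since 12:50:18 (the post-restart kubelet); the only kube-system workload that never becomes usable is metrics-server-6867b74b74-sn7tq. Two illustrative follow-up checks, assuming the kubectl context name equals the profile name:

  kubectl --context no-preload-658679 describe node no-preload-658679
  kubectl --context no-preload-658679 top node   # expected to fail while metrics.k8s.io is unavailable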
	
	
	==> dmesg <==
	[Dec 2 12:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060264] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047306] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.213302] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.819622] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.653615] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.564792] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.057247] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052124] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.168963] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.142963] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.275481] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[Dec 2 12:50] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.062218] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.749457] systemd-fstab-generator[1436]: Ignoring "noauto" option for root device
	[  +4.736730] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.366144] systemd-fstab-generator[2084]: Ignoring "noauto" option for root device
	[  +3.217855] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.223422] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] <==
	{"level":"info","ts":"2024-12-02T12:50:04.672351Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.205:2380"}
	{"level":"info","ts":"2024-12-02T12:50:04.672431Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.205:2380"}
	{"level":"info","ts":"2024-12-02T12:50:04.672682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 switched to configuration voters=(5855946521106091300)"}
	{"level":"info","ts":"2024-12-02T12:50:04.673554Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2835eac8f11eb509","local-member-id":"51448055b6368d24","added-peer-id":"51448055b6368d24","added-peer-peer-urls":["https://192.168.61.205:2380"]}
	{"level":"info","ts":"2024-12-02T12:50:04.674057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2835eac8f11eb509","local-member-id":"51448055b6368d24","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:50:04.674209Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:50:06.485363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-02T12:50:06.485432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-02T12:50:06.485455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 received MsgPreVoteResp from 51448055b6368d24 at term 2"}
	{"level":"info","ts":"2024-12-02T12:50:06.485471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became candidate at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.485477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 received MsgVoteResp from 51448055b6368d24 at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.485490Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became leader at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.485498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 51448055b6368d24 elected leader 51448055b6368d24 at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.497974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:50:06.498257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:50:06.497988Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"51448055b6368d24","local-member-attributes":"{Name:no-preload-658679 ClientURLs:[https://192.168.61.205:2379]}","request-path":"/0/members/51448055b6368d24/attributes","cluster-id":"2835eac8f11eb509","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T12:50:06.498644Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T12:50:06.498700Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-02T12:50:06.499188Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:50:06.499436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:50:06.500004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.205:2379"}
	{"level":"info","ts":"2024-12-02T12:50:06.500622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T13:00:06.526581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":819}
	{"level":"info","ts":"2024-12-02T13:00:06.536146Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":819,"took":"8.756355ms","hash":3605462655,"current-db-size-bytes":2715648,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2715648,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-02T13:00:06.536231Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3605462655,"revision":819,"compact-revision":-1}
	
	
	==> kernel <==
	 13:03:34 up 14 min,  0 users,  load average: 0.32, 0.17, 0.11
	Linux no-preload-658679 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] <==
	W1202 13:00:08.817135       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:00:08.817399       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:00:08.818417       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:00:08.818481       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:01:08.819397       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:01:08.819564       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 13:01:08.819594       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:01:08.819658       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:01:08.820790       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:01:08.820859       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:03:08.821662       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:03:08.821797       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 13:03:08.821700       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:03:08.821871       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:03:08.823037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:03:08.823102       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
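The recurring 503s above mean the aggregated v1beta1.metrics.k8s.io APIService is registered but its backing metrics-server endpoints never become available, which lines up with the metrics-server-related timeouts in this run. A minimal way to confirm the aggregation status from the same profile (commands are illustrative; the context name is assumed to match the profile):

  kubectl --context no-preload-658679 get apiservice v1beta1.metrics.k8s.io
  kubectl --context no-preload-658679 -n kube-system get endpoints metrics-server
  kubectl --context no-preload-658679 -n kube-system describe pod metrics-server-6867b74b74-sn7tq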
	
	
	==> kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] <==
	E1202 12:58:11.448894       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:58:11.909250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 12:58:41.454945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:58:41.917401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 12:59:11.460557       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:59:11.928701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 12:59:41.467051       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 12:59:41.940050       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:00:11.475294       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:00:11.949601       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:00:41.482472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:00:41.957594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:00:51.747120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-658679"
	E1202 13:01:11.488491       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:01:11.966918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:01:26.450156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="250.993µs"
	I1202 13:01:37.449035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="143.303µs"
	E1202 13:01:41.494116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:01:41.974192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:02:11.501367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:02:11.982712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:02:41.507478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:02:41.990731       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:03:11.513380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:03:11.999222       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
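The resource-quota and garbage-collector errors above are downstream of the same issue: both controllers enumerate every registered API group, and discovery keeps failing on the stale metrics.k8s.io/v1beta1 entry. The same failure can be reproduced interactively; kubectl prints an equivalent "unable to retrieve the complete list of server APIs" warning (context name assumed as above):

  kubectl --context no-preload-658679 api-resources --api-group=metrics.k8s.io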
	
	
	==> kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 12:50:09.088670       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 12:50:09.097673       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.205"]
	E1202 12:50:09.097840       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 12:50:09.138251       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 12:50:09.138384       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 12:50:09.138438       1 server_linux.go:169] "Using iptables Proxier"
	I1202 12:50:09.142258       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 12:50:09.142561       1 server.go:483] "Version info" version="v1.31.2"
	I1202 12:50:09.142711       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:50:09.144695       1 config.go:199] "Starting service config controller"
	I1202 12:50:09.144740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 12:50:09.144843       1 config.go:105] "Starting endpoint slice config controller"
	I1202 12:50:09.144861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 12:50:09.145310       1 config.go:328] "Starting node config controller"
	I1202 12:50:09.146502       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 12:50:09.245215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 12:50:09.245241       1 shared_informer.go:320] Caches are synced for service config
	I1202 12:50:09.246735       1 shared_informer.go:320] Caches are synced for node config
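The truncated errors at the top of this kube-proxy log come from its startup cleanup of leftover nftables rules; the guest kernel rejects the nft operations ("Operation not supported"), so kube-proxy falls back to the iptables proxier, as the "Using iptables Proxier" line confirms. Illustrative checks on the node, assuming the profile name matches the node name:

  minikube -p no-preload-658679 ssh -- sudo nft list tables                    # fails if nftables is unsupported
  minikube -p no-preload-658679 ssh -- sudo iptables -t nat -L KUBE-SERVICES   # confirms the iptables proxier is active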
	
	
	==> kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] <==
	I1202 12:50:05.351873       1 serving.go:386] Generated self-signed cert in-memory
	W1202 12:50:07.733726       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 12:50:07.733822       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 12:50:07.733832       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 12:50:07.733843       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 12:50:07.824166       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1202 12:50:07.826839       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:50:07.833097       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1202 12:50:07.834877       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 12:50:07.834926       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 12:50:07.835042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 12:50:07.936050       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 13:02:23 no-preload-658679 kubelet[1443]: E1202 13:02:23.607963    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144543607507975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:32 no-preload-658679 kubelet[1443]: E1202 13:02:32.431329    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:02:33 no-preload-658679 kubelet[1443]: E1202 13:02:33.609360    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144553609026202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:33 no-preload-658679 kubelet[1443]: E1202 13:02:33.609405    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144553609026202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:43 no-preload-658679 kubelet[1443]: E1202 13:02:43.610608    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144563610230872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:43 no-preload-658679 kubelet[1443]: E1202 13:02:43.611112    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144563610230872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:44 no-preload-658679 kubelet[1443]: E1202 13:02:44.432600    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:02:53 no-preload-658679 kubelet[1443]: E1202 13:02:53.613436    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144573612511489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:53 no-preload-658679 kubelet[1443]: E1202 13:02:53.613461    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144573612511489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:02:55 no-preload-658679 kubelet[1443]: E1202 13:02:55.436850    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]: E1202 13:03:03.451253    1443 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]: E1202 13:03:03.614635    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144583614350360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:03 no-preload-658679 kubelet[1443]: E1202 13:03:03.614658    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144583614350360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:10 no-preload-658679 kubelet[1443]: E1202 13:03:10.431817    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:03:13 no-preload-658679 kubelet[1443]: E1202 13:03:13.616280    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144593615850609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:13 no-preload-658679 kubelet[1443]: E1202 13:03:13.616341    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144593615850609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:23 no-preload-658679 kubelet[1443]: E1202 13:03:23.618047    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144603617511989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:23 no-preload-658679 kubelet[1443]: E1202 13:03:23.618089    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144603617511989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:25 no-preload-658679 kubelet[1443]: E1202 13:03:25.432058    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:03:33 no-preload-658679 kubelet[1443]: E1202 13:03:33.622175    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144613619949706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:03:33 no-preload-658679 kubelet[1443]: E1202 13:03:33.623707    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144613619949706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
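
	Two distinct failures repeat in the kubelet log above: metrics-server stuck in ImagePullBackOff on the deliberately unreachable fake.domain image, and the iptables canary failing because the ip6tables nat table is missing (the module may simply not be built for the minikube guest kernel). A sketch of commands that would confirm each symptom by hand; the profile and pod names are the ones in this log, and the ip6table_nat module name is an assumption about the guest:
	# Confirm the image-pull failure reported for metrics-server
	kubectl --context no-preload-658679 -n kube-system describe pod metrics-server-6867b74b74-sn7tq
	# Check for the missing IPv6 nat table inside the guest; try loading the module if it is absent
	minikube -p no-preload-658679 ssh "sudo ip6tables -t nat -L -n || sudo modprobe ip6table_nat"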
	
	
	==> storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] <==
	I1202 12:50:39.733363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 12:50:39.742561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 12:50:39.742657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 12:50:39.749385       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 12:50:39.749540       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-658679_aee4fca7-ecbd-474c-9f02-2e66a09e3bcf!
	I1202 12:50:39.750352       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a29abcb-c5c7-4502-917f-abd7d8e4a569", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-658679_aee4fca7-ecbd-474c-9f02-2e66a09e3bcf became leader
	I1202 12:50:39.850277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-658679_aee4fca7-ecbd-474c-9f02-2e66a09e3bcf!
	
	
	==> storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] <==
	I1202 12:50:09.073018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 12:50:39.076094       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
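
	The storage-provisioner instance above (ff4595…) exited because it could not reach the in-cluster API service at 10.96.0.1:443 within its 30s timeout, while its replacement (b120e7…, shown earlier) came up at 12:50:39 and acquired the lease, so this looks like a transient startup-ordering problem rather than a persistent network fault. A sketch of how that reachability could be checked by hand (profile name and service IP come from this log; the curl invocation assumes curl is available in the guest):
	# Is the default kubernetes Service endpoint populated?
	kubectl --context no-preload-658679 get endpoints kubernetes
	# Can the node reach the ClusterIP the provisioner timed out on?
	minikube -p no-preload-658679 ssh "curl -sk --max-time 5 https://10.96.0.1:443/version"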
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658679 -n no-preload-658679
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-658679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-sn7tq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-658679 describe pod metrics-server-6867b74b74-sn7tq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-658679 describe pod metrics-server-6867b74b74-sn7tq: exit status 1 (59.588235ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-sn7tq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-658679 describe pod metrics-server-6867b74b74-sn7tq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
[the identical warning above repeated 16 more times while 192.168.50.171:8443 kept refusing connections; duplicate lines elided]
E1202 12:57:49.238268   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
[the identical warning repeated a further 114 times as the API server continued to refuse connections; duplicate lines elided]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
E1202 13:00:01.369810   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
[previous warning repeated 44 more times while the API server at 192.168.50.171:8443 remained unreachable]
E1202 13:02:49.238233   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
[previous warning repeated 131 more times while the API server at 192.168.50.171:8443 remained unreachable]
E1202 13:05:01.370494   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
[... the same "connection refused" warning from helpers_test.go:329 repeated 51 times ...]
E1202 13:05:52.315913   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
[... the same "connection refused" warning from helpers_test.go:329 repeated 40 times ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (233.861133ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-666766" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
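[editor's note] The long run of warnings above is the test helper polling the apiserver for the dashboard pod until its 9m0s deadline expires; every attempt fails with "connection refused", so the apiserver itself never came back after the stop/start. For reference, below is a minimal client-go sketch of an equivalent manual check. It is illustrative only, not the helper's actual code, and the kubeconfig path is an assumption taken from the KUBECONFIG value logged further down.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption based on the KUBECONFIG value in this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20033-6257/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		// Same namespace and label selector the helper polls; a "connection refused"
		// error here means the apiserver is unreachable, not that the pod is missing.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}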
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (220.478981ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
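[editor's note] The two status probes above disagree because each --format flag renders a different field of the same status object as a Go template ({{.Host}} vs {{.APIServer}}), so the host VM can report Running while the apiserver reports Stopped. A small sketch of that kind of template rendering follows; the struct is a hypothetical stand-in, not minikube's real status type.

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the fields minikube's status command exposes;
	// the real struct lives in the minikube source and has more fields.
	type clusterStatus struct {
		Host      string
		APIServer string
	}

	func main() {
		s := clusterStatus{Host: "Running", APIServer: "Stopped"} // values seen in this log
		// A --format value is parsed as a Go text/template and executed against
		// the status object, so each flag picks out one field.
		for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
			t := template.Must(template.New("status").Parse(f + "\n"))
			if err := t.Execute(os.Stdout, s); err != nil { // prints: Running, then Stopped
				panic(err)
			}
		}
	}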
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-666766 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-953044            | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-983490             | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-983490                  | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658679                  | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-983490 image list                           | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:49 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-666766        | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-953044                 | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC | 02 Dec 24 13:02 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:51:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:51:53.986642   61173 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:51:53.986878   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.986887   61173 out.go:358] Setting ErrFile to fd 2...
	I1202 12:51:53.986891   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.987040   61173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:51:53.987531   61173 out.go:352] Setting JSON to false
	I1202 12:51:53.988496   61173 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5666,"bootTime":1733138248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:51:53.988587   61173 start.go:139] virtualization: kvm guest
	I1202 12:51:53.990552   61173 out.go:177] * [default-k8s-diff-port-653783] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:51:53.991681   61173 notify.go:220] Checking for updates...
	I1202 12:51:53.991692   61173 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:51:53.992827   61173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:51:53.993900   61173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:51:53.995110   61173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:51:53.996273   61173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:51:53.997326   61173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:51:53.998910   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:51:53.999556   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:53.999630   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.014837   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I1202 12:51:54.015203   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.015691   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.015717   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.016024   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.016213   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.016420   61173 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:51:54.016702   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.016740   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.031103   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I1202 12:51:54.031480   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.031846   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.031862   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.032152   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.032313   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.066052   61173 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:51:54.067269   61173 start.go:297] selected driver: kvm2
	I1202 12:51:54.067282   61173 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.067398   61173 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:51:54.068083   61173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.068159   61173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:51:54.082839   61173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:51:54.083361   61173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:51:54.083405   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:51:54.083450   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:51:54.083491   61173 start.go:340] cluster config:
	{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.083581   61173 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.085236   61173 out.go:177] * Starting "default-k8s-diff-port-653783" primary control-plane node in "default-k8s-diff-port-653783" cluster
	I1202 12:51:54.086247   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:51:54.086275   61173 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:51:54.086281   61173 cache.go:56] Caching tarball of preloaded images
	I1202 12:51:54.086363   61173 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:51:54.086377   61173 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:51:54.086471   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:51:54.086683   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:51:54.086721   61173 start.go:364] duration metric: took 21.68µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:51:54.086742   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:51:54.086750   61173 fix.go:54] fixHost starting: 
	I1202 12:51:54.087016   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.087049   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.100439   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1202 12:51:54.100860   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.101284   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.101305   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.101699   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.101899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.102027   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:51:54.103398   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Running err=<nil>
	W1202 12:51:54.103428   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:51:54.104862   61173 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-653783" VM ...
	I1202 12:51:51.250214   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:53.251543   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:55.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.384562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:54.397979   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:54.398032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:54.431942   59162 cri.go:89] found id: ""
	I1202 12:51:54.431965   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.431973   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:54.431979   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:54.432024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:54.466033   59162 cri.go:89] found id: ""
	I1202 12:51:54.466054   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.466062   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:54.466067   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:54.466116   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:54.506462   59162 cri.go:89] found id: ""
	I1202 12:51:54.506486   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.506493   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:54.506499   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:54.506545   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:54.539966   59162 cri.go:89] found id: ""
	I1202 12:51:54.539996   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.540006   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:54.540013   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:54.540068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:54.572987   59162 cri.go:89] found id: ""
	I1202 12:51:54.573027   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.573038   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:54.573046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:54.573107   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:54.609495   59162 cri.go:89] found id: ""
	I1202 12:51:54.609528   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.609539   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:54.609547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:54.609593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:54.643109   59162 cri.go:89] found id: ""
	I1202 12:51:54.643136   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.643148   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:54.643205   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:54.643279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:54.681113   59162 cri.go:89] found id: ""
	I1202 12:51:54.681151   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.681160   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:54.681168   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:54.681180   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.734777   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:54.734806   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:54.748171   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:54.748196   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:54.821609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.821628   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:54.821642   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:54.900306   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:54.900339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.438971   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:57.454128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:57.454187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:57.489852   59162 cri.go:89] found id: ""
	I1202 12:51:57.489877   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.489885   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:57.489890   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:57.489938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:57.523496   59162 cri.go:89] found id: ""
	I1202 12:51:57.523515   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.523522   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:57.523528   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:57.523576   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:57.554394   59162 cri.go:89] found id: ""
	I1202 12:51:57.554417   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.554429   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:57.554436   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:57.554497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:57.586259   59162 cri.go:89] found id: ""
	I1202 12:51:57.586281   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.586291   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:57.586298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:57.586353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:57.618406   59162 cri.go:89] found id: ""
	I1202 12:51:57.618427   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.618435   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:57.618440   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:57.618482   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:57.649491   59162 cri.go:89] found id: ""
	I1202 12:51:57.649517   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.649527   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:57.649532   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:57.649575   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:57.682286   59162 cri.go:89] found id: ""
	I1202 12:51:57.682306   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.682313   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:57.682319   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:57.682364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:57.720929   59162 cri.go:89] found id: ""
	I1202 12:51:57.720956   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.720967   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:57.720977   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:57.720987   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:57.802270   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:57.802302   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.841214   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:57.841246   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:57.893691   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:57.893724   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:57.906616   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:57.906640   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:57.973328   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.153852   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:56.653113   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.105934   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:51:54.105950   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.106120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:51:54.108454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.108866   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:48:33 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:51:54.108899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.109032   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:51:54.109170   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109328   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109487   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:51:54.109662   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:51:54.109863   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:51:54.109875   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:51:57.012461   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:51:57.751276   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.250936   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.473500   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:00.487912   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:00.487973   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:00.526513   59162 cri.go:89] found id: ""
	I1202 12:52:00.526539   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.526548   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:00.526557   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:00.526620   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:00.561483   59162 cri.go:89] found id: ""
	I1202 12:52:00.561511   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.561519   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:00.561526   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:00.561583   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:00.592435   59162 cri.go:89] found id: ""
	I1202 12:52:00.592473   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.592484   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:00.592491   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:00.592551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:00.624686   59162 cri.go:89] found id: ""
	I1202 12:52:00.624710   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.624722   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:00.624727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:00.624771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:00.662610   59162 cri.go:89] found id: ""
	I1202 12:52:00.662639   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.662650   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:00.662657   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:00.662721   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:00.695972   59162 cri.go:89] found id: ""
	I1202 12:52:00.695993   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.696000   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:00.696006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:00.696048   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:00.727200   59162 cri.go:89] found id: ""
	I1202 12:52:00.727230   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.727253   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:00.727261   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:00.727316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:00.761510   59162 cri.go:89] found id: ""
	I1202 12:52:00.761536   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.761545   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:00.761556   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:00.761568   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:00.812287   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:00.812318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:00.825282   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:00.825309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:00.894016   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.894042   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:00.894065   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:00.972001   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:00.972034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:59.152373   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:01.153532   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.653266   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.084529   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:02.751465   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:04.752349   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.512982   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:03.528814   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:03.528884   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:03.564137   59162 cri.go:89] found id: ""
	I1202 12:52:03.564159   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.564166   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:03.564173   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:03.564223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:03.608780   59162 cri.go:89] found id: ""
	I1202 12:52:03.608811   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.608822   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:03.608829   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:03.608891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:03.644906   59162 cri.go:89] found id: ""
	I1202 12:52:03.644943   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.644954   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:03.644978   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:03.645052   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:03.676732   59162 cri.go:89] found id: ""
	I1202 12:52:03.676754   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.676761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:03.676767   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:03.676809   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:03.711338   59162 cri.go:89] found id: ""
	I1202 12:52:03.711362   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.711369   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:03.711375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:03.711424   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:03.743657   59162 cri.go:89] found id: ""
	I1202 12:52:03.743682   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.743689   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:03.743694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:03.743737   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:03.777740   59162 cri.go:89] found id: ""
	I1202 12:52:03.777759   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.777766   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:03.777772   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:03.777818   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:03.811145   59162 cri.go:89] found id: ""
	I1202 12:52:03.811169   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.811179   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:03.811190   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:03.811204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:03.862069   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:03.862093   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:03.875133   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:03.875164   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:03.947077   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:03.947102   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:03.947114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:04.023458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:04.023487   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:06.562323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:06.577498   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:06.577556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:06.613937   59162 cri.go:89] found id: ""
	I1202 12:52:06.613962   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.613970   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:06.613976   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:06.614023   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:06.647630   59162 cri.go:89] found id: ""
	I1202 12:52:06.647655   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.647662   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:06.647667   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:06.647711   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:06.683758   59162 cri.go:89] found id: ""
	I1202 12:52:06.683783   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.683793   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:06.683800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:06.683861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:06.722664   59162 cri.go:89] found id: ""
	I1202 12:52:06.722686   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.722694   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:06.722699   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:06.722747   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:06.756255   59162 cri.go:89] found id: ""
	I1202 12:52:06.756280   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.756290   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:06.756296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:06.756340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:06.792350   59162 cri.go:89] found id: ""
	I1202 12:52:06.792376   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.792387   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:06.792394   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:06.792450   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:06.827259   59162 cri.go:89] found id: ""
	I1202 12:52:06.827289   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.827301   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:06.827308   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:06.827367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:06.858775   59162 cri.go:89] found id: ""
	I1202 12:52:06.858795   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.858802   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:06.858811   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:06.858821   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:06.911764   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:06.911795   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:06.925297   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:06.925326   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:06.993703   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:06.993730   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:06.993744   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:07.073657   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:07.073685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:05.653526   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:08.152177   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:06.164438   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:07.251496   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.752479   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.611640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:09.626141   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:09.626199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:09.661406   59162 cri.go:89] found id: ""
	I1202 12:52:09.661425   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.661432   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:09.661439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:09.661498   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:09.698145   59162 cri.go:89] found id: ""
	I1202 12:52:09.698173   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.698184   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:09.698191   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:09.698252   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:09.732150   59162 cri.go:89] found id: ""
	I1202 12:52:09.732178   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.732189   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:09.732197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:09.732261   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:09.768040   59162 cri.go:89] found id: ""
	I1202 12:52:09.768063   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.768070   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:09.768076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:09.768130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:09.801038   59162 cri.go:89] found id: ""
	I1202 12:52:09.801064   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.801075   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:09.801082   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:09.801130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:09.841058   59162 cri.go:89] found id: ""
	I1202 12:52:09.841082   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.841089   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:09.841095   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:09.841137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:09.885521   59162 cri.go:89] found id: ""
	I1202 12:52:09.885541   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.885548   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:09.885554   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:09.885602   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:09.924759   59162 cri.go:89] found id: ""
	I1202 12:52:09.924779   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.924786   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:09.924793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:09.924804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.968241   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:09.968273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:10.020282   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:10.020315   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:10.036491   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:10.036519   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:10.113297   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.113324   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:10.113339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:12.688410   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:12.705296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:12.705356   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:12.743097   59162 cri.go:89] found id: ""
	I1202 12:52:12.743119   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.743127   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:12.743133   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:12.743187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:12.778272   59162 cri.go:89] found id: ""
	I1202 12:52:12.778292   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.778299   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:12.778304   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:12.778365   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:12.816087   59162 cri.go:89] found id: ""
	I1202 12:52:12.816116   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.816127   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:12.816134   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:12.816187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:12.850192   59162 cri.go:89] found id: ""
	I1202 12:52:12.850214   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.850221   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:12.850227   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:12.850282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:12.883325   59162 cri.go:89] found id: ""
	I1202 12:52:12.883351   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.883360   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:12.883367   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:12.883427   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:12.916121   59162 cri.go:89] found id: ""
	I1202 12:52:12.916157   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.916169   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:12.916176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:12.916251   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:12.946704   59162 cri.go:89] found id: ""
	I1202 12:52:12.946733   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.946746   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:12.946753   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:12.946802   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:12.979010   59162 cri.go:89] found id: ""
	I1202 12:52:12.979041   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.979050   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:12.979062   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:12.979075   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:13.062141   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:13.062171   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:13.111866   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:13.111900   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:13.162470   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:13.162498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:13.178497   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:13.178525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:13.245199   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.152556   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:12.153087   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.236522   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:12.249938   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:14.750814   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.746327   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:15.760092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:15.760160   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:15.797460   59162 cri.go:89] found id: ""
	I1202 12:52:15.797484   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.797495   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:15.797503   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:15.797563   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:15.829969   59162 cri.go:89] found id: ""
	I1202 12:52:15.829998   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.830009   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:15.830017   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:15.830072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:15.862390   59162 cri.go:89] found id: ""
	I1202 12:52:15.862418   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.862428   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:15.862435   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:15.862484   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:15.895223   59162 cri.go:89] found id: ""
	I1202 12:52:15.895244   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.895251   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:15.895257   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:15.895311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:15.933157   59162 cri.go:89] found id: ""
	I1202 12:52:15.933184   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.933192   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:15.933197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:15.933245   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:15.964387   59162 cri.go:89] found id: ""
	I1202 12:52:15.964414   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.964425   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:15.964433   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:15.964487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:15.996803   59162 cri.go:89] found id: ""
	I1202 12:52:15.996825   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.996832   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:15.996837   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:15.996881   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:16.029364   59162 cri.go:89] found id: ""
	I1202 12:52:16.029394   59162 logs.go:282] 0 containers: []
	W1202 12:52:16.029402   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:16.029411   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:16.029422   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:16.098237   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:16.098264   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:16.098278   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:16.172386   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:16.172414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:16.216899   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:16.216923   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:16.281565   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:16.281591   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:14.154258   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:16.652807   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.316450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:18.388460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:16.751794   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:19.250295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:18.796337   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:18.809573   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:18.809637   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:18.847965   59162 cri.go:89] found id: ""
	I1202 12:52:18.847991   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.847999   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:18.848004   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:18.848053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:18.883714   59162 cri.go:89] found id: ""
	I1202 12:52:18.883741   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.883751   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:18.883758   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:18.883817   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:18.918581   59162 cri.go:89] found id: ""
	I1202 12:52:18.918605   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.918612   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:18.918617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:18.918672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:18.954394   59162 cri.go:89] found id: ""
	I1202 12:52:18.954426   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.954437   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:18.954443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:18.954502   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:18.995321   59162 cri.go:89] found id: ""
	I1202 12:52:18.995347   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.995355   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:18.995361   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:18.995423   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:19.034030   59162 cri.go:89] found id: ""
	I1202 12:52:19.034055   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.034066   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:19.034073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:19.034130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:19.073569   59162 cri.go:89] found id: ""
	I1202 12:52:19.073597   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.073609   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:19.073615   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:19.073662   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:19.112049   59162 cri.go:89] found id: ""
	I1202 12:52:19.112078   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.112090   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:19.112100   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:19.112113   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:19.180480   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.180502   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:19.180516   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:19.258236   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:19.258264   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:19.299035   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:19.299053   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:19.352572   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:19.352602   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:21.866524   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:21.879286   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:21.879340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:21.910463   59162 cri.go:89] found id: ""
	I1202 12:52:21.910489   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.910498   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:21.910504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:21.910551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:21.943130   59162 cri.go:89] found id: ""
	I1202 12:52:21.943157   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.943165   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:21.943171   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:21.943216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:21.976969   59162 cri.go:89] found id: ""
	I1202 12:52:21.976990   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.976997   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:21.977002   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:21.977055   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:22.022113   59162 cri.go:89] found id: ""
	I1202 12:52:22.022144   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.022153   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:22.022159   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:22.022218   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:22.057387   59162 cri.go:89] found id: ""
	I1202 12:52:22.057406   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.057413   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:22.057418   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:22.057459   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:22.089832   59162 cri.go:89] found id: ""
	I1202 12:52:22.089866   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.089892   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:22.089900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:22.089960   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:22.121703   59162 cri.go:89] found id: ""
	I1202 12:52:22.121727   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.121735   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:22.121740   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:22.121789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:22.155076   59162 cri.go:89] found id: ""
	I1202 12:52:22.155098   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.155108   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:22.155117   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:22.155137   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:22.234831   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:22.234862   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:22.273912   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:22.273945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:22.327932   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:22.327966   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:22.340890   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:22.340913   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:22.419371   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.153845   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.652993   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:23.653111   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.750980   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.250791   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.919868   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:24.935004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:24.935068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:24.972438   59162 cri.go:89] found id: ""
	I1202 12:52:24.972466   59162 logs.go:282] 0 containers: []
	W1202 12:52:24.972474   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:24.972480   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:24.972525   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:25.009282   59162 cri.go:89] found id: ""
	I1202 12:52:25.009310   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.009320   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:25.009329   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:25.009391   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:25.043227   59162 cri.go:89] found id: ""
	I1202 12:52:25.043254   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.043262   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:25.043267   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:25.043318   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:25.079167   59162 cri.go:89] found id: ""
	I1202 12:52:25.079191   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.079198   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:25.079204   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:25.079263   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:25.110308   59162 cri.go:89] found id: ""
	I1202 12:52:25.110332   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.110340   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:25.110346   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:25.110388   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:25.143804   59162 cri.go:89] found id: ""
	I1202 12:52:25.143830   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.143840   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:25.143846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:25.143903   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:25.178114   59162 cri.go:89] found id: ""
	I1202 12:52:25.178140   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.178147   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:25.178155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:25.178204   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:25.212632   59162 cri.go:89] found id: ""
	I1202 12:52:25.212665   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.212675   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:25.212684   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:25.212696   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:25.267733   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:25.267761   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:25.281025   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:25.281048   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:25.346497   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:25.346520   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:25.346531   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:25.437435   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:25.437469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:27.979493   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:27.993542   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:27.993615   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:28.030681   59162 cri.go:89] found id: ""
	I1202 12:52:28.030705   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.030712   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:28.030718   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:28.030771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:28.063991   59162 cri.go:89] found id: ""
	I1202 12:52:28.064019   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.064027   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:28.064032   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:28.064080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:28.097983   59162 cri.go:89] found id: ""
	I1202 12:52:28.098018   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.098029   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:28.098038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:28.098098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:28.131956   59162 cri.go:89] found id: ""
	I1202 12:52:28.131977   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.131987   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:28.131995   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:28.132071   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:28.170124   59162 cri.go:89] found id: ""
	I1202 12:52:28.170160   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.170171   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:28.170177   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:28.170238   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:28.203127   59162 cri.go:89] found id: ""
	I1202 12:52:28.203149   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.203157   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:28.203163   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:28.203216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:28.240056   59162 cri.go:89] found id: ""
	I1202 12:52:28.240081   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.240088   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:28.240094   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:28.240142   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:28.276673   59162 cri.go:89] found id: ""
	I1202 12:52:28.276699   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.276710   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:28.276720   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:28.276733   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:28.333435   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:28.333470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:28.347465   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:28.347491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:52:26.153244   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.153689   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:27.508437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:26.250897   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.250951   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.252183   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:52:28.432745   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:28.432777   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:28.432792   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:28.515984   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:28.516017   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.057069   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:31.070021   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:31.070084   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:31.106501   59162 cri.go:89] found id: ""
	I1202 12:52:31.106530   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.106540   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:31.106547   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:31.106606   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:31.141190   59162 cri.go:89] found id: ""
	I1202 12:52:31.141219   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.141230   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:31.141238   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:31.141298   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:31.176050   59162 cri.go:89] found id: ""
	I1202 12:52:31.176077   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.176087   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:31.176099   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:31.176169   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:31.211740   59162 cri.go:89] found id: ""
	I1202 12:52:31.211769   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.211780   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:31.211786   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:31.211831   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:31.248949   59162 cri.go:89] found id: ""
	I1202 12:52:31.248974   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.248983   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:31.248990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:31.249044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:31.284687   59162 cri.go:89] found id: ""
	I1202 12:52:31.284709   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.284717   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:31.284723   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:31.284765   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:31.317972   59162 cri.go:89] found id: ""
	I1202 12:52:31.317997   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.318004   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:31.318010   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:31.318065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:31.354866   59162 cri.go:89] found id: ""
	I1202 12:52:31.354893   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.354904   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:31.354914   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:31.354927   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:31.425168   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:31.425191   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:31.425202   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:31.508169   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:31.508204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.547193   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:31.547220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:31.601864   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:31.601892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:30.653415   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:33.153132   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.580471   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:32.752026   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:35.251960   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:34.115652   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:34.131644   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:34.131695   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:34.174473   59162 cri.go:89] found id: ""
	I1202 12:52:34.174500   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.174510   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:34.174518   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:34.174571   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:34.226162   59162 cri.go:89] found id: ""
	I1202 12:52:34.226190   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.226201   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:34.226208   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:34.226271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:34.269202   59162 cri.go:89] found id: ""
	I1202 12:52:34.269230   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.269240   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:34.269248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:34.269327   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:34.304571   59162 cri.go:89] found id: ""
	I1202 12:52:34.304604   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.304615   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:34.304621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:34.304670   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:34.339285   59162 cri.go:89] found id: ""
	I1202 12:52:34.339316   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.339327   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:34.339334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:34.339401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:34.374919   59162 cri.go:89] found id: ""
	I1202 12:52:34.374952   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.374964   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:34.374973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:34.375035   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:34.409292   59162 cri.go:89] found id: ""
	I1202 12:52:34.409319   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.409330   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:34.409337   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:34.409404   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:34.442536   59162 cri.go:89] found id: ""
	I1202 12:52:34.442561   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.442568   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:34.442576   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:34.442587   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:34.494551   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:34.494582   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.508684   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:34.508713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:34.572790   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:34.572816   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:34.572835   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:34.649327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:34.649358   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:37.190648   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:37.203913   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:37.203966   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:37.243165   59162 cri.go:89] found id: ""
	I1202 12:52:37.243186   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.243194   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:37.243199   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:37.243246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:37.279317   59162 cri.go:89] found id: ""
	I1202 12:52:37.279343   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.279351   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:37.279356   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:37.279411   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:37.312655   59162 cri.go:89] found id: ""
	I1202 12:52:37.312684   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.312693   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:37.312702   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:37.312748   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:37.346291   59162 cri.go:89] found id: ""
	I1202 12:52:37.346319   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.346328   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:37.346334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:37.346382   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:37.381534   59162 cri.go:89] found id: ""
	I1202 12:52:37.381555   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.381563   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:37.381569   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:37.381621   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:37.416990   59162 cri.go:89] found id: ""
	I1202 12:52:37.417013   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.417020   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:37.417026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:37.417083   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:37.451149   59162 cri.go:89] found id: ""
	I1202 12:52:37.451174   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.451182   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:37.451187   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:37.451233   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:37.485902   59162 cri.go:89] found id: ""
	I1202 12:52:37.485929   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.485940   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:37.485950   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:37.485970   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:37.541615   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:37.541645   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:37.554846   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:37.554866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:37.622432   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:37.622457   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:37.622471   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:37.708793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:37.708832   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:35.154170   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:37.653220   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:36.660437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:37.751726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.252016   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.246822   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:40.260893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:40.260959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:40.294743   59162 cri.go:89] found id: ""
	I1202 12:52:40.294773   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.294782   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:40.294789   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:40.294845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:40.338523   59162 cri.go:89] found id: ""
	I1202 12:52:40.338557   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.338570   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:40.338577   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:40.338628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:40.373134   59162 cri.go:89] found id: ""
	I1202 12:52:40.373162   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.373170   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:40.373176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:40.373225   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:40.410197   59162 cri.go:89] found id: ""
	I1202 12:52:40.410233   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.410247   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:40.410256   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:40.410333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:40.442497   59162 cri.go:89] found id: ""
	I1202 12:52:40.442521   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.442530   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:40.442536   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:40.442597   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:40.477835   59162 cri.go:89] found id: ""
	I1202 12:52:40.477863   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.477872   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:40.477879   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:40.477936   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:40.511523   59162 cri.go:89] found id: ""
	I1202 12:52:40.511547   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.511559   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:40.511567   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:40.511628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:40.545902   59162 cri.go:89] found id: ""
	I1202 12:52:40.545928   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.545942   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:40.545962   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:40.545976   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:40.595638   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:40.595669   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:40.609023   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:40.609043   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:40.680826   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:40.680848   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:40.680866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:40.756551   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:40.756579   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:43.295761   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:43.308764   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:43.308836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:43.343229   59162 cri.go:89] found id: ""
	I1202 12:52:43.343258   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.343268   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:43.343276   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:43.343335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:39.653604   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:42.152871   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:39.732455   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:42.750873   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.250740   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:43.376841   59162 cri.go:89] found id: ""
	I1202 12:52:43.376861   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.376868   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:43.376874   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:43.376918   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:43.415013   59162 cri.go:89] found id: ""
	I1202 12:52:43.415033   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.415041   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:43.415046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:43.415094   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:43.451563   59162 cri.go:89] found id: ""
	I1202 12:52:43.451590   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.451601   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:43.451608   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:43.451658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:43.492838   59162 cri.go:89] found id: ""
	I1202 12:52:43.492859   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.492867   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:43.492872   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:43.492934   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:43.531872   59162 cri.go:89] found id: ""
	I1202 12:52:43.531898   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.531908   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:43.531914   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:43.531957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:43.566235   59162 cri.go:89] found id: ""
	I1202 12:52:43.566260   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.566270   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:43.566277   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:43.566332   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:43.601502   59162 cri.go:89] found id: ""
	I1202 12:52:43.601531   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.601542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:43.601553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:43.601567   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:43.650984   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:43.651012   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:43.664273   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:43.664296   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:43.735791   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:43.735819   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:43.735833   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:43.817824   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:43.817861   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.356130   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:46.368755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:46.368835   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:46.404552   59162 cri.go:89] found id: ""
	I1202 12:52:46.404574   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.404582   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:46.404588   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:46.404640   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:46.438292   59162 cri.go:89] found id: ""
	I1202 12:52:46.438318   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.438329   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:46.438337   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:46.438397   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:46.471614   59162 cri.go:89] found id: ""
	I1202 12:52:46.471636   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.471643   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:46.471649   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:46.471752   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:46.502171   59162 cri.go:89] found id: ""
	I1202 12:52:46.502193   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.502201   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:46.502207   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:46.502250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:46.533820   59162 cri.go:89] found id: ""
	I1202 12:52:46.533842   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.533851   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:46.533859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:46.533914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:46.566891   59162 cri.go:89] found id: ""
	I1202 12:52:46.566918   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.566928   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:46.566936   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:46.566980   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:46.599112   59162 cri.go:89] found id: ""
	I1202 12:52:46.599143   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.599154   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:46.599161   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:46.599215   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:46.630794   59162 cri.go:89] found id: ""
	I1202 12:52:46.630837   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.630849   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:46.630860   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:46.630876   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:46.644180   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:46.644210   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:46.705881   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:46.705921   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:46.705936   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:46.781327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:46.781359   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.820042   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:46.820072   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:44.654330   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:47.152273   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.816427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:48.884464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:47.751118   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.752726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.368930   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:49.381506   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:49.381556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:49.417928   59162 cri.go:89] found id: ""
	I1202 12:52:49.417955   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.417965   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:49.417977   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:49.418034   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:49.450248   59162 cri.go:89] found id: ""
	I1202 12:52:49.450276   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.450286   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:49.450295   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:49.450366   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:49.484288   59162 cri.go:89] found id: ""
	I1202 12:52:49.484311   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.484318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:49.484323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:49.484372   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:49.518565   59162 cri.go:89] found id: ""
	I1202 12:52:49.518585   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.518595   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:49.518602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:49.518650   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:49.552524   59162 cri.go:89] found id: ""
	I1202 12:52:49.552549   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.552556   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:49.552561   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:49.552609   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:49.586570   59162 cri.go:89] found id: ""
	I1202 12:52:49.586599   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.586610   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:49.586617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:49.586672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:49.622561   59162 cri.go:89] found id: ""
	I1202 12:52:49.622590   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.622601   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:49.622609   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:49.622666   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:49.659092   59162 cri.go:89] found id: ""
	I1202 12:52:49.659117   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.659129   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:49.659152   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:49.659170   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:49.672461   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:49.672491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:49.738609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:49.738637   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:49.738670   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:49.820458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:49.820488   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.860240   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:49.860269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.411571   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:52.425037   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:52.425106   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:52.458215   59162 cri.go:89] found id: ""
	I1202 12:52:52.458244   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.458255   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:52.458262   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:52.458316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:52.491781   59162 cri.go:89] found id: ""
	I1202 12:52:52.491809   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.491820   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:52.491827   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:52.491879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:52.528829   59162 cri.go:89] found id: ""
	I1202 12:52:52.528855   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.528864   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:52.528870   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:52.528914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:52.560930   59162 cri.go:89] found id: ""
	I1202 12:52:52.560957   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.560965   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:52.560971   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:52.561021   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:52.594102   59162 cri.go:89] found id: ""
	I1202 12:52:52.594139   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.594152   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:52.594160   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:52.594222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:52.627428   59162 cri.go:89] found id: ""
	I1202 12:52:52.627452   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.627460   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:52.627465   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:52.627529   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:52.659143   59162 cri.go:89] found id: ""
	I1202 12:52:52.659167   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.659175   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:52.659180   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:52.659230   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:52.691603   59162 cri.go:89] found id: ""
	I1202 12:52:52.691625   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.691632   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:52.691640   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:52.691651   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.741989   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:52.742016   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:52.755769   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:52.755790   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:52.826397   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:52.826418   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:52.826431   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:52.904705   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:52.904734   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.653476   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:52.152372   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:51.755127   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.252182   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:55.449363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:55.462294   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:55.462350   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:55.500829   59162 cri.go:89] found id: ""
	I1202 12:52:55.500856   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.500865   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:55.500871   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:55.500927   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:55.533890   59162 cri.go:89] found id: ""
	I1202 12:52:55.533920   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.533931   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:55.533942   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:55.533998   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:55.566686   59162 cri.go:89] found id: ""
	I1202 12:52:55.566715   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.566725   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:55.566736   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:55.566790   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:55.598330   59162 cri.go:89] found id: ""
	I1202 12:52:55.598357   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.598367   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:55.598374   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:55.598429   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:55.630648   59162 cri.go:89] found id: ""
	I1202 12:52:55.630676   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.630686   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:55.630694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:55.630755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:55.664611   59162 cri.go:89] found id: ""
	I1202 12:52:55.664633   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.664640   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:55.664645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:55.664687   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:55.697762   59162 cri.go:89] found id: ""
	I1202 12:52:55.697789   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.697797   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:55.697803   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:55.697853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:55.735239   59162 cri.go:89] found id: ""
	I1202 12:52:55.735263   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.735271   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:55.735279   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:55.735292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:55.805187   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:55.805217   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:55.805233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:55.888420   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:55.888452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.927535   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:55.927561   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:55.976883   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:55.976909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:54.152753   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:56.154364   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.654202   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.968436   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:58.036631   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:56.750816   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.752427   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.490700   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:58.504983   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:58.505053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:58.541332   59162 cri.go:89] found id: ""
	I1202 12:52:58.541352   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.541359   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:58.541365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:58.541409   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:58.579437   59162 cri.go:89] found id: ""
	I1202 12:52:58.579459   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.579466   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:58.579472   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:58.579521   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:58.617374   59162 cri.go:89] found id: ""
	I1202 12:52:58.617406   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.617417   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:58.617425   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:58.617486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:58.653242   59162 cri.go:89] found id: ""
	I1202 12:52:58.653269   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.653280   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:58.653287   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:58.653345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:58.686171   59162 cri.go:89] found id: ""
	I1202 12:52:58.686201   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.686210   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:58.686215   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:58.686262   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:58.719934   59162 cri.go:89] found id: ""
	I1202 12:52:58.719956   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.719966   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:58.719974   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:58.720030   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:58.759587   59162 cri.go:89] found id: ""
	I1202 12:52:58.759610   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.759619   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:58.759626   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:58.759678   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:58.790885   59162 cri.go:89] found id: ""
	I1202 12:52:58.790908   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.790915   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:58.790922   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:58.790934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:58.840192   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:58.840220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.853639   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:58.853663   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:58.924643   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:58.924669   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:58.924679   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:59.013916   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:59.013945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.552305   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:01.565577   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:01.565642   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:01.598261   59162 cri.go:89] found id: ""
	I1202 12:53:01.598294   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.598304   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:01.598310   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:01.598377   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:01.631527   59162 cri.go:89] found id: ""
	I1202 12:53:01.631556   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.631565   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:01.631570   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:01.631631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:01.670788   59162 cri.go:89] found id: ""
	I1202 12:53:01.670812   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.670820   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:01.670826   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:01.670880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:01.708801   59162 cri.go:89] found id: ""
	I1202 12:53:01.708828   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.708838   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:01.708846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:01.708914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:01.746053   59162 cri.go:89] found id: ""
	I1202 12:53:01.746074   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.746083   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:01.746120   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:01.746184   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:01.780873   59162 cri.go:89] found id: ""
	I1202 12:53:01.780894   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.780901   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:01.780907   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:01.780951   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:01.817234   59162 cri.go:89] found id: ""
	I1202 12:53:01.817259   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.817269   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:01.817276   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:01.817335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:01.850277   59162 cri.go:89] found id: ""
	I1202 12:53:01.850302   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.850317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:01.850327   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:01.850342   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:01.933014   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:01.933055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.971533   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:01.971562   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:02.020280   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:02.020311   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:02.034786   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:02.034814   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:02.104013   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:01.152305   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.153925   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:01.250308   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.250937   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:05.751259   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.604595   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:04.618004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:04.618057   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:04.651388   59162 cri.go:89] found id: ""
	I1202 12:53:04.651414   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.651428   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:04.651436   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:04.651495   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:04.686973   59162 cri.go:89] found id: ""
	I1202 12:53:04.686998   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.687005   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:04.687019   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:04.687063   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:04.720630   59162 cri.go:89] found id: ""
	I1202 12:53:04.720654   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.720661   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:04.720667   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:04.720724   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:04.754657   59162 cri.go:89] found id: ""
	I1202 12:53:04.754682   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.754689   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:04.754694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:04.754746   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:04.787583   59162 cri.go:89] found id: ""
	I1202 12:53:04.787611   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.787621   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:04.787628   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:04.787686   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:04.818962   59162 cri.go:89] found id: ""
	I1202 12:53:04.818988   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.818999   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:04.819006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:04.819059   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:04.852015   59162 cri.go:89] found id: ""
	I1202 12:53:04.852035   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.852042   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:04.852047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:04.852097   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:04.886272   59162 cri.go:89] found id: ""
	I1202 12:53:04.886294   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.886301   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:04.886309   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:04.886320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:04.934682   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:04.934712   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:04.947889   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:04.947911   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:05.018970   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:05.018995   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:05.019010   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:05.098203   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:05.098233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:07.637320   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:07.650643   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:07.650706   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:07.683468   59162 cri.go:89] found id: ""
	I1202 12:53:07.683491   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.683499   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:07.683504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:07.683565   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:07.719765   59162 cri.go:89] found id: ""
	I1202 12:53:07.719792   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.719799   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:07.719805   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:07.719855   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:07.760939   59162 cri.go:89] found id: ""
	I1202 12:53:07.760986   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.760996   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:07.761004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:07.761066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:07.799175   59162 cri.go:89] found id: ""
	I1202 12:53:07.799219   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.799231   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:07.799239   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:07.799300   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:07.831957   59162 cri.go:89] found id: ""
	I1202 12:53:07.831987   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.831999   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:07.832007   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:07.832067   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:07.865982   59162 cri.go:89] found id: ""
	I1202 12:53:07.866008   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.866015   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:07.866022   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:07.866080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:07.903443   59162 cri.go:89] found id: ""
	I1202 12:53:07.903467   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.903477   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:07.903484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:07.903541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:07.939268   59162 cri.go:89] found id: ""
	I1202 12:53:07.939293   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.939300   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:07.939310   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:07.939324   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:07.952959   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:07.952984   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:08.039178   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:08.039207   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:08.039223   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:08.121432   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:08.121469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:08.164739   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:08.164767   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:05.652537   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:07.652894   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.116377   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:07.188477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:08.250489   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.250657   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.718599   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:10.731079   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:10.731154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:10.767605   59162 cri.go:89] found id: ""
	I1202 12:53:10.767626   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.767633   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:10.767639   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:10.767689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:10.800464   59162 cri.go:89] found id: ""
	I1202 12:53:10.800483   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.800491   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:10.800496   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:10.800554   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:10.840808   59162 cri.go:89] found id: ""
	I1202 12:53:10.840836   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.840853   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:10.840859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:10.840922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:10.877653   59162 cri.go:89] found id: ""
	I1202 12:53:10.877681   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.877690   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:10.877698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:10.877755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:10.915849   59162 cri.go:89] found id: ""
	I1202 12:53:10.915873   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.915883   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:10.915891   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:10.915953   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:10.948652   59162 cri.go:89] found id: ""
	I1202 12:53:10.948680   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.948691   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:10.948697   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:10.948755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:10.983126   59162 cri.go:89] found id: ""
	I1202 12:53:10.983154   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.983165   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:10.983172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:10.983232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:11.015350   59162 cri.go:89] found id: ""
	I1202 12:53:11.015378   59162 logs.go:282] 0 containers: []
	W1202 12:53:11.015390   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:11.015400   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:11.015414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:11.028713   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:11.028737   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:11.095904   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:11.095932   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:11.095950   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:11.179078   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:11.179114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:11.216075   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:11.216106   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:09.653482   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:12.152117   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.272450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:12.750358   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:14.751316   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.774975   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:13.787745   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:13.787804   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:13.821793   59162 cri.go:89] found id: ""
	I1202 12:53:13.821824   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.821834   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:13.821840   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:13.821885   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:13.854831   59162 cri.go:89] found id: ""
	I1202 12:53:13.854855   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.854864   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:13.854871   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:13.854925   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:13.885113   59162 cri.go:89] found id: ""
	I1202 12:53:13.885142   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.885149   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:13.885155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:13.885201   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:13.915811   59162 cri.go:89] found id: ""
	I1202 12:53:13.915841   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.915851   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:13.915859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:13.915914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:13.948908   59162 cri.go:89] found id: ""
	I1202 12:53:13.948936   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.948946   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:13.948953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:13.949016   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:13.986502   59162 cri.go:89] found id: ""
	I1202 12:53:13.986531   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.986540   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:13.986548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:13.986607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:14.018182   59162 cri.go:89] found id: ""
	I1202 12:53:14.018210   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.018221   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:14.018229   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:14.018287   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:14.054185   59162 cri.go:89] found id: ""
	I1202 12:53:14.054221   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.054233   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:14.054244   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:14.054272   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:14.131353   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.131381   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:14.131402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:14.212787   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:14.212822   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:14.254043   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:14.254073   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:14.309591   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:14.309620   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:16.824827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:16.838150   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:16.838210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:16.871550   59162 cri.go:89] found id: ""
	I1202 12:53:16.871570   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.871577   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:16.871582   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:16.871625   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:16.908736   59162 cri.go:89] found id: ""
	I1202 12:53:16.908766   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.908775   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:16.908781   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:16.908844   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:16.941404   59162 cri.go:89] found id: ""
	I1202 12:53:16.941427   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.941437   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:16.941444   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:16.941500   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:16.971984   59162 cri.go:89] found id: ""
	I1202 12:53:16.972011   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.972023   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:16.972030   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:16.972079   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:17.004573   59162 cri.go:89] found id: ""
	I1202 12:53:17.004596   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.004607   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:17.004614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:17.004661   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:17.037171   59162 cri.go:89] found id: ""
	I1202 12:53:17.037199   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.037210   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:17.037218   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:17.037271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:17.070862   59162 cri.go:89] found id: ""
	I1202 12:53:17.070888   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.070899   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:17.070906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:17.070959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:17.102642   59162 cri.go:89] found id: ""
	I1202 12:53:17.102668   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.102678   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:17.102688   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:17.102701   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:17.182590   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:17.182623   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:17.224313   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:17.224346   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:17.272831   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:17.272855   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:17.286217   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:17.286240   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:17.357274   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.153570   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.651955   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:18.654103   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.340429   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:17.252036   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.751295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.858294   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:19.871731   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:19.871787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:19.906270   59162 cri.go:89] found id: ""
	I1202 12:53:19.906290   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.906297   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:19.906303   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:19.906345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:19.937769   59162 cri.go:89] found id: ""
	I1202 12:53:19.937790   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.937797   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:19.937802   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:19.937845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:19.971667   59162 cri.go:89] found id: ""
	I1202 12:53:19.971689   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.971706   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:19.971714   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:19.971787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:20.005434   59162 cri.go:89] found id: ""
	I1202 12:53:20.005455   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.005461   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:20.005467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:20.005512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:20.041817   59162 cri.go:89] found id: ""
	I1202 12:53:20.041839   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.041848   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:20.041856   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:20.041906   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:20.073923   59162 cri.go:89] found id: ""
	I1202 12:53:20.073946   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.073958   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:20.073966   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:20.074026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:20.107360   59162 cri.go:89] found id: ""
	I1202 12:53:20.107398   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.107409   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:20.107416   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:20.107479   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:20.153919   59162 cri.go:89] found id: ""
	I1202 12:53:20.153942   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.153952   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:20.153963   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:20.153977   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:20.211581   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:20.211610   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:20.227589   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:20.227615   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:20.305225   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:20.305250   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:20.305265   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:20.382674   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:20.382713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
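Each cycle above asks CRI-O for every control-plane container by name, finds none, and then falls back to the kubelet/CRI-O journals and the overall container status; a minimal sketch that reproduces the same checks by hand (assuming crictl and journalctl are available on the node) is:

	# Reproduce minikube's "0 containers" probe for each control-plane component.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -n "$ids" ] && echo "$name: $ids" || echo "no container found matching \"$name\""
	done
	# Fallback sources consulted when nothing is found:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a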
	I1202 12:53:22.924662   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:22.940038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:22.940101   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:22.984768   59162 cri.go:89] found id: ""
	I1202 12:53:22.984795   59162 logs.go:282] 0 containers: []
	W1202 12:53:22.984806   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:22.984815   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:22.984876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:23.024159   59162 cri.go:89] found id: ""
	I1202 12:53:23.024180   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.024188   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:23.024194   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:23.024254   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:23.059929   59162 cri.go:89] found id: ""
	I1202 12:53:23.059948   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.059956   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:23.059961   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:23.060003   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:23.093606   59162 cri.go:89] found id: ""
	I1202 12:53:23.093627   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.093633   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:23.093639   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:23.093689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:23.127868   59162 cri.go:89] found id: ""
	I1202 12:53:23.127893   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.127904   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:23.127910   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:23.127965   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:23.164988   59162 cri.go:89] found id: ""
	I1202 12:53:23.165006   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.165013   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:23.165018   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:23.165058   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:23.196389   59162 cri.go:89] found id: ""
	I1202 12:53:23.196412   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.196423   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:23.196430   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:23.196481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:23.229337   59162 cri.go:89] found id: ""
	I1202 12:53:23.229358   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.229366   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:23.229376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:23.229404   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:23.284041   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:23.284066   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:23.297861   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:23.297884   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:53:21.152126   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:23.154090   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:22.420399   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:22.250790   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:24.252122   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:53:23.364113   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:23.364131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:23.364142   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:23.446244   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:23.446273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:25.986668   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:25.998953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:25.999013   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:26.034844   59162 cri.go:89] found id: ""
	I1202 12:53:26.034868   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.034876   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:26.034883   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:26.034938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:26.067050   59162 cri.go:89] found id: ""
	I1202 12:53:26.067076   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.067083   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:26.067089   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:26.067152   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:26.098705   59162 cri.go:89] found id: ""
	I1202 12:53:26.098735   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.098746   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:26.098754   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:26.098812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:26.131283   59162 cri.go:89] found id: ""
	I1202 12:53:26.131312   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.131321   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:26.131327   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:26.131379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:26.164905   59162 cri.go:89] found id: ""
	I1202 12:53:26.164933   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.164943   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:26.164950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:26.165009   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:26.196691   59162 cri.go:89] found id: ""
	I1202 12:53:26.196715   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.196724   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:26.196732   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:26.196789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:26.227341   59162 cri.go:89] found id: ""
	I1202 12:53:26.227364   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.227374   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:26.227380   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:26.227436   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:26.260569   59162 cri.go:89] found id: ""
	I1202 12:53:26.260589   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.260597   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:26.260606   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:26.260619   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:26.313150   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:26.313175   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:26.327732   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:26.327762   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:26.392748   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:26.392768   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:26.392778   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:26.474456   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:26.474484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:24.146771   58902 pod_ready.go:82] duration metric: took 4m0.000100995s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" ...
	E1202 12:53:24.146796   58902 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 12:53:24.146811   58902 pod_ready.go:39] duration metric: took 4m6.027386938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:53:24.146852   58902 kubeadm.go:597] duration metric: took 4m15.570212206s to restartPrimaryControlPlane
	W1202 12:53:24.146901   58902 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:24.146926   58902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:25.492478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:26.253906   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:28.752313   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:29.018514   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:29.032328   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:29.032457   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:29.067696   59162 cri.go:89] found id: ""
	I1202 12:53:29.067720   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.067732   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:29.067738   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:29.067794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:29.101076   59162 cri.go:89] found id: ""
	I1202 12:53:29.101096   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.101103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:29.101108   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:29.101150   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:29.136446   59162 cri.go:89] found id: ""
	I1202 12:53:29.136473   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.136483   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:29.136489   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:29.136552   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:29.170820   59162 cri.go:89] found id: ""
	I1202 12:53:29.170849   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.170860   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:29.170868   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:29.170931   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:29.205972   59162 cri.go:89] found id: ""
	I1202 12:53:29.206001   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.206012   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:29.206020   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:29.206086   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:29.242118   59162 cri.go:89] found id: ""
	I1202 12:53:29.242155   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.242165   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:29.242172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:29.242222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:29.281377   59162 cri.go:89] found id: ""
	I1202 12:53:29.281405   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.281417   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:29.281426   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:29.281487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:29.316350   59162 cri.go:89] found id: ""
	I1202 12:53:29.316381   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.316393   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:29.316404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:29.316418   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:29.392609   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:29.392648   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.430777   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:29.430804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:29.484157   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:29.484190   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:29.498434   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:29.498457   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:29.568203   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.069043   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:32.081796   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:32.081867   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:32.115767   59162 cri.go:89] found id: ""
	I1202 12:53:32.115789   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.115797   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:32.115802   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:32.115861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:32.145962   59162 cri.go:89] found id: ""
	I1202 12:53:32.145984   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.145992   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:32.145999   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:32.146046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:32.177709   59162 cri.go:89] found id: ""
	I1202 12:53:32.177734   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.177744   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:32.177752   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:32.177796   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:32.211897   59162 cri.go:89] found id: ""
	I1202 12:53:32.211921   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.211930   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:32.211937   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:32.211994   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:32.244401   59162 cri.go:89] found id: ""
	I1202 12:53:32.244425   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.244434   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:32.244442   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:32.244503   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:32.278097   59162 cri.go:89] found id: ""
	I1202 12:53:32.278123   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.278140   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:32.278151   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:32.278210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:32.312740   59162 cri.go:89] found id: ""
	I1202 12:53:32.312774   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.312785   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:32.312793   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:32.312860   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:32.345849   59162 cri.go:89] found id: ""
	I1202 12:53:32.345878   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.345889   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:32.345901   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:32.345917   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:32.395961   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:32.395998   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:32.409582   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:32.409609   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:32.473717   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.473746   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:32.473763   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:32.548547   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:32.548580   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:31.572430   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:31.251492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:33.251616   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.750762   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.088628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:35.102152   59162 kubeadm.go:597] duration metric: took 4m2.014751799s to restartPrimaryControlPlane
	W1202 12:53:35.102217   59162 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:35.102244   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:36.768528   59162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666262663s)
	I1202 12:53:36.768601   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:36.783104   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:36.792966   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:36.802188   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:36.802205   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:36.802234   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:36.811253   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:36.811290   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:36.820464   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:36.829386   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:36.829426   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:36.838814   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.847241   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:36.847272   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.856295   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:36.864892   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:36.864929   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
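The sequence above is a stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here every file was simply absent); a compact sketch of the same logic, assuming the four standard file names shown in the log, is:

	# Remove kubeconfigs that do not reference the expected control-plane endpoint.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done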
	I1202 12:53:36.873699   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:37.076297   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
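The preflight warning above only notes that the kubelet systemd unit is not enabled for boot; the remedy kubeadm itself suggests is:

	# Enable the kubelet unit so it starts on boot, as the preflight warning recommends.
	sudo systemctl enable kubelet.service
	sudo systemctl status kubelet --no-pager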
	I1202 12:53:34.644489   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:38.250676   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.250779   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.724427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:43.796493   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:42.251341   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:44.751292   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.547760   58902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.400809303s)
	I1202 12:53:50.547840   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:50.564051   58902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:50.573674   58902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:50.582945   58902 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:50.582965   58902 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:50.582998   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:50.591979   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:50.592030   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:50.601043   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:50.609896   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:50.609945   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:50.618918   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.627599   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:50.627634   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.636459   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:50.644836   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:50.644880   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:50.653742   58902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:50.698104   58902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 12:53:50.698187   58902 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:53:50.811202   58902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:53:50.811340   58902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:53:50.811466   58902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 12:53:50.822002   58902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:53:47.252492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:49.750168   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.823836   58902 out.go:235]   - Generating certificates and keys ...
	I1202 12:53:50.823933   58902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:53:50.824031   58902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:53:50.824141   58902 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:53:50.824223   58902 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:53:50.824328   58902 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:53:50.824402   58902 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:53:50.824500   58902 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:53:50.824583   58902 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:53:50.824697   58902 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:53:50.824826   58902 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:53:50.824896   58902 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:53:50.824984   58902 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:53:50.912363   58902 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:53:50.997719   58902 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 12:53:51.181182   58902 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:53:51.424413   58902 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:53:51.526033   58902 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:53:51.526547   58902 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:53:51.528947   58902 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:53:51.530665   58902 out.go:235]   - Booting up control plane ...
	I1202 12:53:51.530761   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:53:51.530862   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:53:51.530946   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:53:51.551867   58902 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:53:51.557869   58902 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:53:51.557960   58902 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:53:51.690048   58902 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 12:53:51.690190   58902 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 12:53:52.190616   58902 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56624ms
	I1202 12:53:52.190735   58902 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 12:53:49.876477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:52.948470   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:51.752318   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:54.250701   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:57.192620   58902 kubeadm.go:310] [api-check] The API server is healthy after 5.001974319s
	I1202 12:53:57.205108   58902 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 12:53:57.217398   58902 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 12:53:57.241642   58902 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 12:53:57.241842   58902 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-953044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 12:53:57.252962   58902 kubeadm.go:310] [bootstrap-token] Using token: kqbw67.r50dkuvxntafmbtm
	I1202 12:53:57.254175   58902 out.go:235]   - Configuring RBAC rules ...
	I1202 12:53:57.254282   58902 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 12:53:57.258707   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 12:53:57.265127   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 12:53:57.268044   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 12:53:57.273630   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 12:53:57.276921   58902 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 12:53:57.598936   58902 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 12:53:58.031759   58902 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 12:53:58.598943   58902 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 12:53:58.599838   58902 kubeadm.go:310] 
	I1202 12:53:58.599900   58902 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 12:53:58.599927   58902 kubeadm.go:310] 
	I1202 12:53:58.600020   58902 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 12:53:58.600031   58902 kubeadm.go:310] 
	I1202 12:53:58.600067   58902 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 12:53:58.600150   58902 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 12:53:58.600249   58902 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 12:53:58.600266   58902 kubeadm.go:310] 
	I1202 12:53:58.600343   58902 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 12:53:58.600353   58902 kubeadm.go:310] 
	I1202 12:53:58.600418   58902 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 12:53:58.600429   58902 kubeadm.go:310] 
	I1202 12:53:58.600500   58902 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 12:53:58.600602   58902 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 12:53:58.600694   58902 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 12:53:58.600704   58902 kubeadm.go:310] 
	I1202 12:53:58.600878   58902 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 12:53:58.600996   58902 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 12:53:58.601008   58902 kubeadm.go:310] 
	I1202 12:53:58.601121   58902 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601248   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 12:53:58.601281   58902 kubeadm.go:310] 	--control-plane 
	I1202 12:53:58.601298   58902 kubeadm.go:310] 
	I1202 12:53:58.601437   58902 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 12:53:58.601451   58902 kubeadm.go:310] 
	I1202 12:53:58.601570   58902 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601726   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 12:53:58.601878   58902 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
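Once kubeadm reports that the control plane initialized successfully, the cluster can be inspected on the node with the admin kubeconfig named in the init output (a usage sketch, not part of the recorded test run):

	# As root on the node, verify the freshly initialized control plane using the
	# kubeconfig path printed in the init output above.
	export KUBECONFIG=/etc/kubernetes/admin.conf
	kubectl get nodes
	kubectl get pods -n kube-system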
	I1202 12:53:58.602090   58902 cni.go:84] Creating CNI manager for ""
	I1202 12:53:58.602108   58902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:53:58.603597   58902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:53:58.604832   58902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:53:58.616597   58902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
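The 496-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log; a representative bridge-plugin conflist of the general shape such a file takes (illustrative values only, not the exact content minikube copied) could be written like this:

	# Illustrative only: the real 1-k8s.conflist contents are not shown in this log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF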
	I1202 12:53:58.633585   58902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 12:53:58.633639   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:58.633694   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-953044 minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=embed-certs-953044 minikube.k8s.io/primary=true
	I1202 12:53:58.843567   58902 ops.go:34] apiserver oom_adj: -16
	I1202 12:53:58.843643   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:56.252079   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:58.750596   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:59.344179   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:59.844667   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.343766   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.843808   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.343992   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.843750   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.344088   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.431425   58902 kubeadm.go:1113] duration metric: took 3.797838401s to wait for elevateKubeSystemPrivileges
	I1202 12:54:02.431466   58902 kubeadm.go:394] duration metric: took 4m53.907154853s to StartCluster
	I1202 12:54:02.431488   58902 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.431574   58902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:54:02.433388   58902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.433759   58902 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:54:02.433844   58902 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 12:54:02.433961   58902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-953044"
	I1202 12:54:02.433979   58902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-953044"
	I1202 12:54:02.433978   58902 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:54:02.433983   58902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-953044"
	I1202 12:54:02.434009   58902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-953044"
	I1202 12:54:02.433983   58902 addons.go:69] Setting metrics-server=true in profile "embed-certs-953044"
	I1202 12:54:02.434082   58902 addons.go:234] Setting addon metrics-server=true in "embed-certs-953044"
	W1202 12:54:02.434090   58902 addons.go:243] addon metrics-server should already be in state true
	I1202 12:54:02.434121   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	W1202 12:54:02.433990   58902 addons.go:243] addon storage-provisioner should already be in state true
	I1202 12:54:02.434195   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.434500   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434544   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434550   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434566   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434589   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434606   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.435408   58902 out.go:177] * Verifying Kubernetes components...
	I1202 12:54:02.436893   58902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:54:02.450113   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1202 12:54:02.450620   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.451022   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.451047   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.451376   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.451545   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.454345   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I1202 12:54:02.454346   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1202 12:54:02.454788   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.454832   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.455251   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455268   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455281   58902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-953044"
	W1202 12:54:02.455303   58902 addons.go:243] addon default-storageclass should already be in state true
	I1202 12:54:02.455336   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.455286   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455377   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455570   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455696   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455708   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.455739   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456068   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456085   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456105   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456122   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.470558   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I1202 12:54:02.470761   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I1202 12:54:02.470971   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471035   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43157
	I1202 12:54:02.471142   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471406   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471426   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471494   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471620   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471633   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471955   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472019   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.472035   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.472110   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472127   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472446   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472647   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472685   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.472721   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.474380   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.474597   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.476328   58902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 12:54:02.476338   58902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:54:02.477992   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 12:54:02.478008   58902 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 12:54:02.478022   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.478549   58902 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.478567   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 12:54:02.478584   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.481364   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481698   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.481725   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481956   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.482008   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482150   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.482274   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.482417   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.482503   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.482521   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482785   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.483079   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.483352   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.483478   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.489285   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I1202 12:54:02.489644   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.490064   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.490085   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.490346   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.490510   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.491774   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.491961   58902 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.491974   58902 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 12:54:02.491990   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.494680   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495069   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.495098   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495259   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.495392   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.495582   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.495700   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
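The three "sshutil.go:53] new ssh client" lines above show the driver opening SSH sessions to the VM (user "docker", port 22, the per-profile key under .minikube/machines/) before it copies the addon manifests over. A minimal sketch of that kind of key-based SSH client, assuming the golang.org/x/crypto/ssh package; the host address and key path below are placeholders for illustration, not the exact minikube implementation:

// Sketch: open an SSH session with a private key, as the sshutil
// "new ssh client" lines above imply. Host, user and key path are
// placeholders.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/<profile>/id_rsa") // placeholder path
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.72.203:22", cfg) // placeholder address
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("kubelet: %s err=%v\n", out, err)
}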
	I1202 12:54:02.626584   58902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:54:02.650914   58902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658909   58902 node_ready.go:49] node "embed-certs-953044" has status "Ready":"True"
	I1202 12:54:02.658931   58902 node_ready.go:38] duration metric: took 7.986729ms for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658939   58902 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:02.663878   58902 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:02.708572   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.711794   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 12:54:02.711813   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 12:54:02.729787   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.760573   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 12:54:02.760595   58902 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 12:54:02.814731   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:02.814756   58902 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 12:54:02.867045   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:03.549497   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.549532   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.549914   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.549970   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.549999   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550010   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.550032   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.550256   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.550360   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550336   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551311   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551333   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551629   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551591   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.551670   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.551686   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551694   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551907   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.552278   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.552295   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.577295   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.577322   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.577618   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.577631   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.577647   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.835721   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.835752   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836073   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836092   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836108   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.836118   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836460   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836478   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836489   58902 addons.go:475] Verifying addon metrics-server=true in "embed-certs-953044"
	I1202 12:54:03.836492   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.838858   58902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1202 12:54:03.840263   58902 addons.go:510] duration metric: took 1.406440873s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
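The addon-enable sequence above is a copy-then-apply pattern: manifests are scp'd to /etc/kubernetes/addons/ on the node and then applied with the bundled kubectl under the node's kubeconfig ("sudo KUBECONFIG=/var/lib/minikube/kubeconfig ... kubectl apply -f ..."). A minimal sketch of that second step, shelling out from Go; the binary and manifest paths mirror the log but should be read as illustrative, not as the minikube source:

// Sketch: apply addon manifests with an explicit KUBECONFIG, mirroring
// the "kubectl apply -f ..." log lines above. Paths are assumptions.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		log.Fatalf("apply failed: %v", err)
	}
}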
	I1202 12:53:59.032460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:02.100433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:01.251084   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:03.252024   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:05.752273   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:04.669768   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:07.171770   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:08.180411   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:08.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.751482   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:09.670413   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.669602   58902 pod_ready.go:93] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.669624   58902 pod_ready.go:82] duration metric: took 8.00571576s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.669634   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674276   58902 pod_ready.go:93] pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.674293   58902 pod_ready.go:82] duration metric: took 4.652882ms for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674301   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678330   58902 pod_ready.go:93] pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.678346   58902 pod_ready.go:82] duration metric: took 4.037883ms for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678354   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184565   58902 pod_ready.go:93] pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:12.184591   58902 pod_ready.go:82] duration metric: took 1.506229118s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184601   58902 pod_ready.go:39] duration metric: took 9.525652092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:12.184622   58902 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:12.184683   58902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:12.204339   58902 api_server.go:72] duration metric: took 9.770541552s to wait for apiserver process to appear ...
	I1202 12:54:12.204361   58902 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:12.204383   58902 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8443/healthz ...
	I1202 12:54:12.208020   58902 api_server.go:279] https://192.168.72.203:8443/healthz returned 200:
	ok
	I1202 12:54:12.208957   58902 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:12.208975   58902 api_server.go:131] duration metric: took 4.608337ms to wait for apiserver health ...
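The healthz check above hits https://192.168.72.203:8443/healthz and treats a 200 with body "ok" as healthy. A minimal sketch of that kind of probe loop, assuming the apiserver's certificate is not trusted by the probing client (hence the skipped TLS verification, tolerable only for a local test probe like this one):

// Sketch: poll an apiserver /healthz endpoint until it returns 200
// or the deadline passes. The URL is a placeholder.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // test probe only
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitHealthy(ctx, "https://192.168.72.203:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}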
	I1202 12:54:12.208982   58902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:12.215103   58902 system_pods.go:59] 9 kube-system pods found
	I1202 12:54:12.215123   58902 system_pods.go:61] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.215128   58902 system_pods.go:61] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.215132   58902 system_pods.go:61] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.215135   58902 system_pods.go:61] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.215145   58902 system_pods.go:61] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.215150   58902 system_pods.go:61] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.215157   58902 system_pods.go:61] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.215171   58902 system_pods.go:61] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.215181   58902 system_pods.go:61] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.215190   58902 system_pods.go:74] duration metric: took 6.203134ms to wait for pod list to return data ...
	I1202 12:54:12.215198   58902 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:12.217406   58902 default_sa.go:45] found service account: "default"
	I1202 12:54:12.217421   58902 default_sa.go:55] duration metric: took 2.217536ms for default service account to be created ...
	I1202 12:54:12.217427   58902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:12.221673   58902 system_pods.go:86] 9 kube-system pods found
	I1202 12:54:12.221690   58902 system_pods.go:89] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.221695   58902 system_pods.go:89] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.221701   58902 system_pods.go:89] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.221705   58902 system_pods.go:89] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.221709   58902 system_pods.go:89] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.221712   58902 system_pods.go:89] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.221716   58902 system_pods.go:89] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.221724   58902 system_pods.go:89] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.221729   58902 system_pods.go:89] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.221736   58902 system_pods.go:126] duration metric: took 4.304449ms to wait for k8s-apps to be running ...
	I1202 12:54:12.221745   58902 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:12.221780   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:12.238687   58902 system_svc.go:56] duration metric: took 16.934566ms WaitForService to wait for kubelet
	I1202 12:54:12.238707   58902 kubeadm.go:582] duration metric: took 9.804914519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:12.238722   58902 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:12.268746   58902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:12.268776   58902 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:12.268790   58902 node_conditions.go:105] duration metric: took 30.063656ms to run NodePressure ...
	I1202 12:54:12.268802   58902 start.go:241] waiting for startup goroutines ...
	I1202 12:54:12.268813   58902 start.go:246] waiting for cluster config update ...
	I1202 12:54:12.268828   58902 start.go:255] writing updated cluster config ...
	I1202 12:54:12.269149   58902 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:12.315523   58902 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:12.317559   58902 out.go:177] * Done! kubectl is now configured to use "embed-certs-953044" cluster and "default" namespace by default
	I1202 12:54:11.252465   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:13.251203   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:15.251601   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:17.332421   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:17.751347   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.252108   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.404508   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:21.252458   57877 pod_ready.go:82] duration metric: took 4m0.007570673s for pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace to be "Ready" ...
	E1202 12:54:21.252479   57877 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1202 12:54:21.252487   57877 pod_ready.go:39] duration metric: took 4m2.808635222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
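The 57877 run above waits 4m0s for metrics-server-6867b74b74-sn7tq to report Ready and then gives up with "context deadline exceeded"; the pod stays Pending, which is consistent with the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier, where the test deliberately points metrics-server at an unpullable image. A minimal sketch of the same Ready-condition poll via kubectl's jsonpath output; the pod name and namespace come from the log, the interval and the use of kubectl (rather than minikube's own pod_ready helpers) are illustrative:

// Sketch: poll a pod's Ready condition until it is "True" or the
// context deadline passes, mirroring the pod_ready.go waits above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(ctx context.Context, ns, pod string) error {
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "-n", ns, "get", "pod", pod, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", ns, pod, ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, "kube-system", "metrics-server-6867b74b74-sn7tq"); err != nil {
		fmt.Println(err) // the condition this test report surfaces
	}
}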
	I1202 12:54:21.252501   57877 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:21.252524   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:21.252565   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:21.311644   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:21.311663   57877 cri.go:89] found id: ""
	I1202 12:54:21.311670   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:21.311712   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.316826   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:21.316881   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:21.366930   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:21.366951   57877 cri.go:89] found id: ""
	I1202 12:54:21.366959   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:21.366999   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.371132   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:21.371194   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:21.405238   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.405261   57877 cri.go:89] found id: ""
	I1202 12:54:21.405270   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:21.405312   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.409631   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:21.409687   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:21.444516   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.444535   57877 cri.go:89] found id: ""
	I1202 12:54:21.444542   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:21.444583   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.448736   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:21.448796   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:21.485458   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:21.485484   57877 cri.go:89] found id: ""
	I1202 12:54:21.485494   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:21.485546   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.489882   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:21.489953   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:21.525951   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.525971   57877 cri.go:89] found id: ""
	I1202 12:54:21.525978   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:21.526028   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.530141   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:21.530186   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:21.564886   57877 cri.go:89] found id: ""
	I1202 12:54:21.564909   57877 logs.go:282] 0 containers: []
	W1202 12:54:21.564920   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:21.564928   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:21.564981   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:21.601560   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.601585   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:21.601593   57877 cri.go:89] found id: ""
	I1202 12:54:21.601603   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:21.601660   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.605710   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.609870   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:21.609892   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.645558   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:21.645581   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.680733   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:21.680764   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.731429   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:21.731452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.764658   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:21.764680   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:22.249475   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:22.249511   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:22.305127   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:22.305162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:22.369496   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:22.369528   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:22.384486   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:22.384510   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:22.425402   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:22.425424   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:22.463801   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:22.463828   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:22.507022   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:22.507048   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:22.638422   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:22.638452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
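Each "Gathering logs for ..." pass above is the same two-step pattern: resolve the container ID with "crictl ps -a --quiet --name=<component>" and then read its tail with "crictl logs --tail 400 <id>". A minimal sketch of that pattern run locally (in the log it is run through sudo over SSH):

// Sketch: gather the last 400 log lines of a CRI container by name,
// mirroring the crictl invocations in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func tailContainerLogs(name string, lines int) (string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return "", fmt.Errorf("list %s containers: %w", name, err)
	}
	fields := strings.Fields(string(ids))
	if len(fields) == 0 {
		return "", fmt.Errorf("no container was found matching %q", name)
	}
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(lines), fields[0]).CombinedOutput()
	return string(out), err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		logs, err := tailContainerLogs(c, 400)
		if err != nil {
			log.Printf("%s: %v", c, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", c, logs)
	}
}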
	I1202 12:54:25.190880   57877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:25.206797   57877 api_server.go:72] duration metric: took 4m14.027370187s to wait for apiserver process to appear ...
	I1202 12:54:25.206823   57877 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:25.206866   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:25.206924   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:25.241643   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.241669   57877 cri.go:89] found id: ""
	I1202 12:54:25.241680   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:25.241734   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.245997   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:25.246037   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:25.290955   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:25.290973   57877 cri.go:89] found id: ""
	I1202 12:54:25.290980   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:25.291029   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.295284   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:25.295329   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:25.333254   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:25.333275   57877 cri.go:89] found id: ""
	I1202 12:54:25.333284   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:25.333328   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.337649   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:25.337698   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:25.371662   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.371682   57877 cri.go:89] found id: ""
	I1202 12:54:25.371691   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:25.371739   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.376026   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:25.376075   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:25.411223   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:25.411238   57877 cri.go:89] found id: ""
	I1202 12:54:25.411245   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:25.411287   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.415307   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:25.415351   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:25.451008   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:25.451027   57877 cri.go:89] found id: ""
	I1202 12:54:25.451035   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:25.451089   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.455681   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:25.455727   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:25.499293   57877 cri.go:89] found id: ""
	I1202 12:54:25.499315   57877 logs.go:282] 0 containers: []
	W1202 12:54:25.499325   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:25.499332   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:25.499377   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:25.533874   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:25.533896   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:25.533903   57877 cri.go:89] found id: ""
	I1202 12:54:25.533912   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:25.533961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.537993   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.541881   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:25.541899   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:25.645488   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:25.645512   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.683783   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:25.683807   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:26.120334   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:26.120367   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:26.484425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:26.190493   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:26.190521   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:26.235397   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:26.235421   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:26.285411   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:26.285452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:26.331807   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:26.331836   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:26.374437   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:26.374461   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:26.436459   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:26.436487   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:26.472126   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:26.472162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:26.504819   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:26.504840   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:26.518789   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:26.518821   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.069521   57877 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 12:54:29.074072   57877 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 12:54:29.075022   57877 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:29.075041   57877 api_server.go:131] duration metric: took 3.868210222s to wait for apiserver health ...
	I1202 12:54:29.075048   57877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:29.075069   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:29.075112   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:29.110715   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:29.110735   57877 cri.go:89] found id: ""
	I1202 12:54:29.110742   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:29.110790   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.114994   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:29.115040   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:29.150431   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.150459   57877 cri.go:89] found id: ""
	I1202 12:54:29.150468   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:29.150525   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.154909   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:29.154967   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:29.198139   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.198162   57877 cri.go:89] found id: ""
	I1202 12:54:29.198172   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:29.198224   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.202969   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:29.203031   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:29.243771   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.243795   57877 cri.go:89] found id: ""
	I1202 12:54:29.243802   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:29.243843   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.248039   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:29.248106   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:29.286473   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.286492   57877 cri.go:89] found id: ""
	I1202 12:54:29.286498   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:29.286538   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.290543   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:29.290590   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:29.327899   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.327916   57877 cri.go:89] found id: ""
	I1202 12:54:29.327922   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:29.327961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.332516   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:29.332571   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:29.368204   57877 cri.go:89] found id: ""
	I1202 12:54:29.368236   57877 logs.go:282] 0 containers: []
	W1202 12:54:29.368247   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:29.368255   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:29.368301   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:29.407333   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.407358   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.407364   57877 cri.go:89] found id: ""
	I1202 12:54:29.407372   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:29.407425   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.412153   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.416525   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:29.416548   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.457360   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:29.457394   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.495662   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:29.495691   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.549304   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:29.549331   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.585693   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:29.585718   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.621888   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:29.621912   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.670118   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:29.670153   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:29.685833   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:29.685855   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:29.792525   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:29.792555   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.837090   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:29.837138   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.872862   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:29.872893   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:30.228483   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:30.228523   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:30.298252   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:30.298285   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:32.851536   57877 system_pods.go:59] 8 kube-system pods found
	I1202 12:54:32.851562   57877 system_pods.go:61] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.851567   57877 system_pods.go:61] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.851571   57877 system_pods.go:61] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.851574   57877 system_pods.go:61] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.851577   57877 system_pods.go:61] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.851580   57877 system_pods.go:61] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.851586   57877 system_pods.go:61] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.851590   57877 system_pods.go:61] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.851597   57877 system_pods.go:74] duration metric: took 3.776542886s to wait for pod list to return data ...
	I1202 12:54:32.851604   57877 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:32.853911   57877 default_sa.go:45] found service account: "default"
	I1202 12:54:32.853928   57877 default_sa.go:55] duration metric: took 2.318516ms for default service account to be created ...
	I1202 12:54:32.853935   57877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:32.858485   57877 system_pods.go:86] 8 kube-system pods found
	I1202 12:54:32.858508   57877 system_pods.go:89] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.858513   57877 system_pods.go:89] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.858519   57877 system_pods.go:89] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.858523   57877 system_pods.go:89] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.858526   57877 system_pods.go:89] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.858530   57877 system_pods.go:89] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.858536   57877 system_pods.go:89] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.858540   57877 system_pods.go:89] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.858547   57877 system_pods.go:126] duration metric: took 4.607096ms to wait for k8s-apps to be running ...
	I1202 12:54:32.858555   57877 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:32.858592   57877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:32.874267   57877 system_svc.go:56] duration metric: took 15.704013ms WaitForService to wait for kubelet
	I1202 12:54:32.874293   57877 kubeadm.go:582] duration metric: took 4m21.694870267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:32.874311   57877 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:32.877737   57877 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:32.877757   57877 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:32.877768   57877 node_conditions.go:105] duration metric: took 3.452076ms to run NodePressure ...
	I1202 12:54:32.877782   57877 start.go:241] waiting for startup goroutines ...
	I1202 12:54:32.877791   57877 start.go:246] waiting for cluster config update ...
	I1202 12:54:32.877807   57877 start.go:255] writing updated cluster config ...
	I1202 12:54:32.878129   57877 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:32.926190   57877 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:32.927894   57877 out.go:177] * Done! kubectl is now configured to use "no-preload-658679" cluster and "default" namespace by default
	I1202 12:54:29.556420   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:35.636450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:38.708454   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:44.788462   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:47.860484   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:53.940448   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:57.012536   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:03.092433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:06.164483   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:12.244464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:15.316647   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:21.396479   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:24.468584   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
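The 61173 run never reaches its machine: every dial of 192.168.39.154:22 fails with "no route to host" for well over a minute, which is how a VM whose network has not come back presents itself in these logs. A minimal sketch of that kind of dial-with-retry loop, which yields exactly this error string when the host is unreachable; the retry count and interval are illustrative, not the driver's actual backoff:

// Sketch: retry a TCP dial to a node's SSH port with a timeout, the loop
// behind the repeated "Error dialing TCP ... no route to host" lines.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, attempts int, timeout time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("Error dialing TCP: %v\n", err) // matches the log format above
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("%s never became reachable: %w", addr, lastErr)
}

func main() {
	if err := waitForSSH("192.168.39.154:22", 20, 5*time.Second); err != nil {
		fmt.Println(err)
	}
}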
	I1202 12:55:32.968600   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:55:32.968731   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:55:32.970229   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:32.970291   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:32.970394   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:32.970513   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:32.970629   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:32.970717   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:32.972396   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:32.972491   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:32.972577   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:32.972734   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:32.972823   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:32.972926   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:32.973006   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:32.973108   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:32.973192   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:32.973318   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:32.973429   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:32.973501   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:32.973594   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:32.973658   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:32.973722   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:32.973819   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:32.973903   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:32.974041   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:32.974157   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:32.974206   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:32.974301   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:32.976508   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:32.976620   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:32.976741   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:32.976842   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:32.976957   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:32.977191   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:32.977281   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:55:32.977342   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977505   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977579   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977795   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977906   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978091   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978174   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978394   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978497   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978743   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978756   59162 kubeadm.go:310] 
	I1202 12:55:32.978801   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:55:32.978859   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:55:32.978868   59162 kubeadm.go:310] 
	I1202 12:55:32.978914   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:55:32.978961   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:55:32.979078   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:55:32.979088   59162 kubeadm.go:310] 
	I1202 12:55:32.979230   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:55:32.979279   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:55:32.979337   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:55:32.979346   59162 kubeadm.go:310] 
	I1202 12:55:32.979484   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:55:32.979580   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:55:32.979593   59162 kubeadm.go:310] 
	I1202 12:55:32.979721   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:55:32.979848   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:55:32.979968   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:55:32.980059   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:55:32.980127   59162 kubeadm.go:310] 
	W1202 12:55:32.980202   59162 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
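	The kubeadm output above already names the next diagnostic steps. Run on the node, they would look roughly like this (a sketch, using the cri-o socket path and kubelet healthz URL shown in the log):
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet
	    curl -sSL http://localhost:10248/healthz
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause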
	
	I1202 12:55:32.980267   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:55:33.452325   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:55:33.467527   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:55:33.477494   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:55:33.477522   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:55:33.477575   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:55:33.487333   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:55:33.487395   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:55:33.497063   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:55:33.506552   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:55:33.506605   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:55:33.515968   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.524922   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:55:33.524956   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.534339   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:55:33.543370   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:55:33.543403   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:55:33.552970   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:55:33.624833   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:33.624990   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:33.767688   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:33.767796   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:33.767909   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:33.935314   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:30.548478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.624512   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.937193   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:33.937290   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:33.937402   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:33.937513   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:33.937620   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:33.937722   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:33.937791   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:33.937845   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:33.937896   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:33.937964   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:33.938028   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:33.938061   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:33.938108   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:34.167163   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:35.008947   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:35.304057   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:35.385824   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:35.409687   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:35.413131   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:35.413218   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:35.569508   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:35.571455   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:35.571596   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:35.578476   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:35.579686   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:35.580586   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:35.582869   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:39.700423   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:42.772498   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:48.852452   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:51.924490   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:58.004488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:01.076456   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:07.160425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:10.228467   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:15.585409   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:56:15.585530   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:15.585792   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:16.308453   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:20.586011   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:20.586257   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:19.380488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:25.460451   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:28.532425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:30.586783   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:30.587053   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:31.533399   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:31.533454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533725   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:31.533749   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533914   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:31.535344   61173 machine.go:96] duration metric: took 4m37.429393672s to provisionDockerMachine
	I1202 12:56:31.535386   61173 fix.go:56] duration metric: took 4m37.448634942s for fixHost
	I1202 12:56:31.535394   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 4m37.448659715s
	W1202 12:56:31.535408   61173 start.go:714] error starting host: provision: host is not running
	W1202 12:56:31.535498   61173 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1202 12:56:31.535507   61173 start.go:729] Will try again in 5 seconds ...
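	At this point the kvm2 domain is expected to be shut off (state=Stopped is confirmed a few lines below). A manual check with libvirt tooling might look like this (a sketch, assuming virsh access on the host):
	    virsh domstate default-k8s-diff-port-653783   # expect: shut off
	    virsh domifaddr default-k8s-diff-port-653783  # empty until the domain gets an IP again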
	I1202 12:56:36.536323   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:56:36.536434   61173 start.go:364] duration metric: took 71.395µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:56:36.536463   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:56:36.536471   61173 fix.go:54] fixHost starting: 
	I1202 12:56:36.536763   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:56:36.536790   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:56:36.551482   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1202 12:56:36.551962   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:56:36.552383   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:56:36.552405   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:56:36.552689   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:56:36.552849   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:36.552968   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:56:36.554481   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Stopped err=<nil>
	I1202 12:56:36.554501   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	W1202 12:56:36.554652   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:56:36.556508   61173 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653783" ...
	I1202 12:56:36.557534   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Start
	I1202 12:56:36.557690   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring networks are active...
	I1202 12:56:36.558371   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network default is active
	I1202 12:56:36.558713   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network mk-default-k8s-diff-port-653783 is active
	I1202 12:56:36.559023   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Getting domain xml...
	I1202 12:56:36.559739   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Creating domain...
	I1202 12:56:37.799440   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting to get IP...
	I1202 12:56:37.800397   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800918   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.800836   62278 retry.go:31] will retry after 192.811495ms: waiting for machine to come up
	I1202 12:56:37.995285   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995743   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995771   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.995697   62278 retry.go:31] will retry after 367.440749ms: waiting for machine to come up
	I1202 12:56:38.365229   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365781   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.365731   62278 retry.go:31] will retry after 350.196014ms: waiting for machine to come up
	I1202 12:56:38.717121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717650   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717681   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.717590   62278 retry.go:31] will retry after 557.454725ms: waiting for machine to come up
	I1202 12:56:39.276110   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276631   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:39.276536   62278 retry.go:31] will retry after 735.275509ms: waiting for machine to come up
	I1202 12:56:40.013307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013888   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.013833   62278 retry.go:31] will retry after 613.45623ms: waiting for machine to come up
	I1202 12:56:40.629220   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629731   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629776   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.629678   62278 retry.go:31] will retry after 748.849722ms: waiting for machine to come up
	I1202 12:56:41.380615   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381052   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381075   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:41.381023   62278 retry.go:31] will retry after 1.342160202s: waiting for machine to come up
	I1202 12:56:42.724822   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725315   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725355   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:42.725251   62278 retry.go:31] will retry after 1.693072543s: waiting for machine to come up
	I1202 12:56:44.420249   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420700   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420721   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:44.420658   62278 retry.go:31] will retry after 2.210991529s: waiting for machine to come up
	I1202 12:56:46.633486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633847   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:46.633807   62278 retry.go:31] will retry after 2.622646998s: waiting for machine to come up
	I1202 12:56:50.587516   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:50.587731   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:49.257705   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258232   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:49.258186   62278 retry.go:31] will retry after 2.375973874s: waiting for machine to come up
	I1202 12:56:51.636055   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636422   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636450   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:51.636379   62278 retry.go:31] will retry after 3.118442508s: waiting for machine to come up
	I1202 12:56:54.757260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757665   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Found IP for machine: 192.168.39.154
	I1202 12:56:54.757689   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has current primary IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757697   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserving static IP address...
	I1202 12:56:54.758088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.758108   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserved static IP address: 192.168.39.154
	I1202 12:56:54.758120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | skip adding static IP to network mk-default-k8s-diff-port-653783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"}
	I1202 12:56:54.758134   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Getting to WaitForSSH function...
	I1202 12:56:54.758142   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for SSH to be available...
	I1202 12:56:54.760333   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760643   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.760672   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760789   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH client type: external
	I1202 12:56:54.760812   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa (-rw-------)
	I1202 12:56:54.760855   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:56:54.760880   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | About to run SSH command:
	I1202 12:56:54.760892   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | exit 0
	I1202 12:56:54.884099   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | SSH cmd err, output: <nil>: 
	I1202 12:56:54.884435   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetConfigRaw
	I1202 12:56:54.885058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:54.887519   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.887823   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.887854   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.888041   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:56:54.888333   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:56:54.888352   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:54.888564   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:54.890754   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891062   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.891090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891254   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:54.891423   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891560   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891709   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:54.891851   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:54.892053   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:54.892070   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:56:54.996722   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:56:54.996751   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.996974   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:54.997004   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.997202   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.000026   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000425   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.000453   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000624   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.000810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.000978   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.001122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.001308   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.001540   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.001562   61173 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653783 && echo "default-k8s-diff-port-653783" | sudo tee /etc/hostname
	I1202 12:56:55.122933   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653783
	
	I1202 12:56:55.122965   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.125788   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126182   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.126219   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126406   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.126555   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126718   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126834   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.126973   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.127180   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.127206   61173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:56:55.242263   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:55.242291   61173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:56:55.242331   61173 buildroot.go:174] setting up certificates
	I1202 12:56:55.242340   61173 provision.go:84] configureAuth start
	I1202 12:56:55.242350   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:55.242604   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:55.245340   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245685   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.245719   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245882   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.248090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248481   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.248512   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248659   61173 provision.go:143] copyHostCerts
	I1202 12:56:55.248718   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:56:55.248733   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:56:55.248810   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:56:55.248920   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:56:55.248931   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:56:55.248965   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:56:55.249039   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:56:55.249049   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:56:55.249081   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:56:55.249152   61173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653783 san=[127.0.0.1 192.168.39.154 default-k8s-diff-port-653783 localhost minikube]
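	The SAN list chosen here can be checked against the generated server.pem with openssl (a sketch, assuming openssl is available on the build host):
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'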
	I1202 12:56:55.688887   61173 provision.go:177] copyRemoteCerts
	I1202 12:56:55.688948   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:56:55.688976   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.691486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.691865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.691896   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.692056   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.692239   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.692403   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.692524   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:55.777670   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:56:55.802466   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 12:56:55.826639   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:56:55.850536   61173 provision.go:87] duration metric: took 608.183552ms to configureAuth
	I1202 12:56:55.850560   61173 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:56:55.850731   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:56:55.850813   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.853607   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.853991   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.854024   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.854122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.854294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854436   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854598   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.854734   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.854883   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.854899   61173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:56:56.083902   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:56:56.083931   61173 machine.go:96] duration metric: took 1.195584241s to provisionDockerMachine
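The CRIO_MINIKUBE_OPTIONS value written to /etc/sysconfig/crio.minikube above only takes effect because the guest's crio.service is expected to source that file; this run does not show that wiring, so the check below is only a sketch (the EnvironmentFile= reference is an assumption about the minikube ISO's unit file, not something this log verifies):

  $ systemctl cat crio | grep -i EnvironmentFile   # expect a line referencing /etc/sysconfig/crio.minikube
  $ systemctl show -p EnvironmentFiles crio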
	I1202 12:56:56.083944   61173 start.go:293] postStartSetup for "default-k8s-diff-port-653783" (driver="kvm2")
	I1202 12:56:56.083957   61173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:56:56.083974   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.084276   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:56:56.084307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.087400   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087727   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.087750   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.088088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.088272   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.088448   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.170612   61173 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:56:56.175344   61173 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:56:56.175366   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:56:56.175454   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:56:56.175529   61173 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:56:56.175610   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:56:56.185033   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:56:56.209569   61173 start.go:296] duration metric: took 125.611321ms for postStartSetup
	I1202 12:56:56.209605   61173 fix.go:56] duration metric: took 19.673134089s for fixHost
	I1202 12:56:56.209623   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.212600   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.212883   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.212923   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.213137   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.213395   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213575   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213708   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.213854   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:56.214014   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:56.214032   61173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:56:56.320723   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733144216.287359296
	
	I1202 12:56:56.320744   61173 fix.go:216] guest clock: 1733144216.287359296
	I1202 12:56:56.320753   61173 fix.go:229] Guest: 2024-12-02 12:56:56.287359296 +0000 UTC Remote: 2024-12-02 12:56:56.209609687 +0000 UTC m=+302.261021771 (delta=77.749609ms)
	I1202 12:56:56.320776   61173 fix.go:200] guest clock delta is within tolerance: 77.749609ms
	I1202 12:56:56.320781   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 19.784333398s
	I1202 12:56:56.320797   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.321011   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:56.323778   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324117   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.324136   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324289   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324759   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324921   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324984   61173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:56:56.325034   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.325138   61173 ssh_runner.go:195] Run: cat /version.json
	I1202 12:56:56.325164   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.327744   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328000   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328083   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328262   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328373   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.328774   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328769   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.328908   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.329007   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.405370   61173 ssh_runner.go:195] Run: systemctl --version
	I1202 12:56:56.427743   61173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:56:56.574416   61173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:56:56.580858   61173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:56:56.580948   61173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:56:56.597406   61173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:56:56.597427   61173 start.go:495] detecting cgroup driver to use...
	I1202 12:56:56.597472   61173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:56:56.612456   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:56:56.625811   61173 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:56:56.625847   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:56:56.642677   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:56:56.657471   61173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:56:56.776273   61173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:56:56.949746   61173 docker.go:233] disabling docker service ...
	I1202 12:56:56.949807   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:56:56.964275   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:56:56.977461   61173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:56:57.091134   61173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:56:57.209421   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:56:57.223153   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:56:57.241869   61173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:56:57.241933   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.252117   61173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:56:57.252174   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.262799   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.275039   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.285987   61173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:56:57.296968   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.307242   61173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.324555   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
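Taken together, the tee and sed commands above leave the runtime configured roughly as follows (a sketch limited to the keys those commands touch; the rest of 02-crio.conf ships with the ISO and is not shown, and file ordering may differ):

  $ cat /etc/crictl.yaml
  runtime-endpoint: unix:///var/run/crio/crio.sock

  $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
    "net.ipv4.ip_unprivileged_port_start=0",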
	I1202 12:56:57.335395   61173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:56:57.344411   61173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:56:57.344450   61173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:56:57.357400   61173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 12:56:57.366269   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:56:57.486764   61173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:56:57.574406   61173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:56:57.574464   61173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:56:57.579268   61173 start.go:563] Will wait 60s for crictl version
	I1202 12:56:57.579328   61173 ssh_runner.go:195] Run: which crictl
	I1202 12:56:57.583110   61173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:56:57.621921   61173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:56:57.622003   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.650543   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.683842   61173 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:56:57.684861   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:57.687188   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687459   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:57.687505   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687636   61173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:56:57.691723   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:56:57.704869   61173 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:56:57.704999   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:56:57.705054   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:56:57.738780   61173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 12:56:57.738828   61173 ssh_runner.go:195] Run: which lz4
	I1202 12:56:57.743509   61173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:56:57.747763   61173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:56:57.747784   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 12:56:59.105988   61173 crio.go:462] duration metric: took 1.362506994s to copy over tarball
	I1202 12:56:59.106062   61173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:57:01.191007   61173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084920502s)
	I1202 12:57:01.191031   61173 crio.go:469] duration metric: took 2.085014298s to extract the tarball
	I1202 12:57:01.191038   61173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 12:57:01.229238   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:57:01.272133   61173 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:57:01.272156   61173 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:57:01.272164   61173 kubeadm.go:934] updating node { 192.168.39.154 8444 v1.31.2 crio true true} ...
	I1202 12:57:01.272272   61173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:57:01.272330   61173 ssh_runner.go:195] Run: crio config
	I1202 12:57:01.318930   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:01.318957   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:01.318968   61173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:57:01.318994   61173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653783 NodeName:default-k8s-diff-port-653783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:57:01.319125   61173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:57:01.319184   61173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:57:01.330162   61173 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:57:01.330226   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:57:01.340217   61173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 12:57:01.356786   61173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:57:01.373210   61173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
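The kubeadm.yaml.new staged above is consumed as-is by the init phases later in this log; nothing validates it separately. If it did need a standalone check, kubeadm's own validator could be pointed at it, roughly as below (a sketch; `kubeadm config validate` exists in v1.31 but is not something this test runs):

  $ sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new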
	I1202 12:57:01.390184   61173 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1202 12:57:01.394099   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:57:01.406339   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:57:01.526518   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:57:01.543879   61173 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783 for IP: 192.168.39.154
	I1202 12:57:01.543899   61173 certs.go:194] generating shared ca certs ...
	I1202 12:57:01.543920   61173 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:57:01.544070   61173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:57:01.544134   61173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:57:01.544147   61173 certs.go:256] generating profile certs ...
	I1202 12:57:01.544285   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.key
	I1202 12:57:01.544377   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key.44fa7240
	I1202 12:57:01.544429   61173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key
	I1202 12:57:01.544579   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:57:01.544608   61173 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:57:01.544617   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:57:01.544636   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:57:01.544659   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:57:01.544688   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:57:01.544727   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:57:01.545381   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:57:01.580933   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:57:01.621199   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:57:01.648996   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:57:01.681428   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 12:57:01.710907   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:57:01.741414   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:57:01.766158   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:57:01.789460   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:57:01.812569   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:57:01.836007   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:57:01.858137   61173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:57:01.874315   61173 ssh_runner.go:195] Run: openssl version
	I1202 12:57:01.880190   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:57:01.893051   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898250   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898306   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.904207   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:57:01.915975   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:57:01.927977   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932436   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932478   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.938049   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:57:01.948744   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:57:01.959472   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963806   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963839   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.969412   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
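The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL subject-hash layout for /etc/ssl/certs: each is the hash printed by the `openssl x509 -hash` call just before it, with a ".0" suffix, so OpenSSL-based clients on the guest can look the CA up by hash. Reproducing one by hand (a sketch; ls output abbreviated):

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ ls -l /etc/ssl/certs/b5213941.0
  lrwxrwxrwx ... /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem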
	I1202 12:57:01.980743   61173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:57:01.986211   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:57:01.992717   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:57:01.998781   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:57:02.004934   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:57:02.010903   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:57:02.016677   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 12:57:02.022595   61173 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:57:02.022680   61173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:57:02.022711   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.060425   61173 cri.go:89] found id: ""
	I1202 12:57:02.060497   61173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:57:02.070807   61173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:57:02.070827   61173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:57:02.070868   61173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:57:02.081036   61173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:57:02.082088   61173 kubeconfig.go:125] found "default-k8s-diff-port-653783" server: "https://192.168.39.154:8444"
	I1202 12:57:02.084179   61173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:57:02.094381   61173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
	I1202 12:57:02.094429   61173 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:57:02.094441   61173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:57:02.094485   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.129098   61173 cri.go:89] found id: ""
	I1202 12:57:02.129152   61173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:57:02.146731   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:57:02.156860   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:57:02.156881   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 12:57:02.156924   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 12:57:02.166273   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:57:02.166322   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:57:02.175793   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 12:57:02.184665   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:57:02.184707   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:57:02.194243   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.203173   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:57:02.203217   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.212563   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 12:57:02.221640   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:57:02.221682   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:57:02.230764   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:57:02.241691   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:02.353099   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.283720   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.487082   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.564623   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
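Between them, the phases above regenerate what the config check at 12:57:02 reported missing: `kubeconfig all` rewrites the four /etc/kubernetes/*.conf files, `certs all` fills in /var/lib/minikube/certs (the certificatesDir from the ClusterConfiguration), and `control-plane all` plus `etcd local` drop the static pod manifests. A quick way to see the result on the guest (a sketch, assuming the standard kubeadm output layout):

  $ ls /etc/kubernetes/*.conf          # admin, kubelet, controller-manager, scheduler kubeconfigs
  $ ls /etc/kubernetes/manifests       # kube-apiserver, kube-controller-manager, kube-scheduler, etcd
  $ ls /var/lib/minikube/certs         # CA plus the per-component certs checked with -checkend earlier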
	I1202 12:57:03.644136   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:57:03.644219   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.144882   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.644873   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.144778   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.645022   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.662892   61173 api_server.go:72] duration metric: took 2.01875734s to wait for apiserver process to appear ...
	I1202 12:57:05.662920   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:57:05.662943   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.328451   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.328479   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.328492   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.368504   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.368547   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.664065   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.681253   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:08.681319   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.163310   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.169674   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:09.169699   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.663220   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.667397   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 12:57:09.675558   61173 api_server.go:141] control plane version: v1.31.2
	I1202 12:57:09.675582   61173 api_server.go:131] duration metric: took 4.012653559s to wait for apiserver health ...
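The earlier 403 responses above are plain authorization denials for the anonymous probe (the body says so), and the 500s are the API server reporting two post-start hooks still pending; once those finish, the endpoint answers "ok". Reproducing the final probe from the guest needs nothing but the cluster CA (a sketch; cert path per the copies earlier in this log):

  $ curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.39.154:8444/healthz
  ok
  $ curl -s --cacert /var/lib/minikube/certs/ca.crt "https://192.168.39.154:8444/healthz?verbose" | tail -n 1
  healthz check passed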
	I1202 12:57:09.675592   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:09.675601   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:09.677275   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:57:09.678527   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:57:09.690640   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 12:57:09.708185   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:57:09.724719   61173 system_pods.go:59] 8 kube-system pods found
	I1202 12:57:09.724747   61173 system_pods.go:61] "coredns-7c65d6cfc9-7g74d" [a35c0ad2-6c02-4e14-afe5-887b3b5fd70f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 12:57:09.724755   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [25bc45db-481f-4c88-853b-105a32e1e8e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 12:57:09.724763   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [af0f2123-8eac-4f90-bc06-1fc1cb10deda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 12:57:09.724769   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [c18b1705-438b-4954-941e-cfe5a3a0f6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 12:57:09.724777   61173 system_pods.go:61] "kube-proxy-5t9gh" [35d08e89-5ad8-4fcb-9bff-5c12bc1fb497] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 12:57:09.724782   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [0db501e4-36fb-4a67-b11d-d6d9f3fa1383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 12:57:09.724789   61173 system_pods.go:61] "metrics-server-6867b74b74-9v79b" [418c7615-5d41-4a24-b497-674f55573a0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:57:09.724794   61173 system_pods.go:61] "storage-provisioner" [dab6b0c7-8e10-435f-a57c-76044eaa11c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 12:57:09.724799   61173 system_pods.go:74] duration metric: took 16.592713ms to wait for pod list to return data ...
	I1202 12:57:09.724808   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:57:09.731235   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:57:09.731260   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 12:57:09.731274   61173 node_conditions.go:105] duration metric: took 6.4605ms to run NodePressure ...
	I1202 12:57:09.731293   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:10.021346   61173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025152   61173 kubeadm.go:739] kubelet initialised
	I1202 12:57:10.025171   61173 kubeadm.go:740] duration metric: took 3.798597ms waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025178   61173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:57:10.029834   61173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.033699   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033718   61173 pod_ready.go:82] duration metric: took 3.86169ms for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.033726   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033731   61173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.037291   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037308   61173 pod_ready.go:82] duration metric: took 3.569468ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.037317   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037322   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.041016   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041035   61173 pod_ready.go:82] duration metric: took 3.705222ms for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.041046   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041071   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:12.047581   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:14.048663   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:16.547831   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:19.047816   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.047839   61173 pod_ready.go:82] duration metric: took 9.006753973s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.047850   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052277   61173 pod_ready.go:93] pod "kube-proxy-5t9gh" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.052296   61173 pod_ready.go:82] duration metric: took 4.440131ms for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052305   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:21.058989   61173 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:22.558501   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:22.558524   61173 pod_ready.go:82] duration metric: took 3.506212984s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:22.558533   61173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:24.564668   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:27.064209   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:30.586451   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:57:30.586705   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586735   59162 kubeadm.go:310] 
	I1202 12:57:30.586786   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:57:30.586842   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:57:30.586859   59162 kubeadm.go:310] 
	I1202 12:57:30.586924   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:57:30.586990   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:57:30.587140   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:57:30.587152   59162 kubeadm.go:310] 
	I1202 12:57:30.587292   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:57:30.587347   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:57:30.587387   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:57:30.587405   59162 kubeadm.go:310] 
	I1202 12:57:30.587557   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:57:30.587642   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:57:30.587655   59162 kubeadm.go:310] 
	I1202 12:57:30.587751   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:57:30.587841   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:57:30.587923   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:57:30.588029   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:57:30.588043   59162 kubeadm.go:310] 
	I1202 12:57:30.588959   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:57:30.589087   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:57:30.589211   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:57:30.589277   59162 kubeadm.go:394] duration metric: took 7m57.557592718s to StartCluster
	I1202 12:57:30.589312   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:57:30.589358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:57:30.634368   59162 cri.go:89] found id: ""
	I1202 12:57:30.634402   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.634414   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:57:30.634423   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:57:30.634489   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:57:30.669582   59162 cri.go:89] found id: ""
	I1202 12:57:30.669605   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.669617   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:57:30.669625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:57:30.669679   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:57:30.707779   59162 cri.go:89] found id: ""
	I1202 12:57:30.707805   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.707815   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:57:30.707823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:57:30.707878   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:57:30.745724   59162 cri.go:89] found id: ""
	I1202 12:57:30.745751   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.745761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:57:30.745768   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:57:30.745816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:57:30.782946   59162 cri.go:89] found id: ""
	I1202 12:57:30.782969   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.782980   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:57:30.782987   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:57:30.783040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:57:30.821743   59162 cri.go:89] found id: ""
	I1202 12:57:30.821776   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.821787   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:57:30.821795   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:57:30.821843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:57:30.859754   59162 cri.go:89] found id: ""
	I1202 12:57:30.859783   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.859793   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:57:30.859801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:57:30.859876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:57:30.893632   59162 cri.go:89] found id: ""
	I1202 12:57:30.893660   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.893668   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:57:30.893677   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:57:30.893690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:57:30.946387   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:57:30.946413   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:57:30.960540   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:57:30.960565   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:57:31.038246   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:57:31.038267   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:57:31.038279   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:57:31.155549   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:57:31.155584   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 12:57:31.221709   59162 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:57:31.221773   59162 out.go:270] * 
	W1202 12:57:31.221846   59162 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.221868   59162 out.go:270] * 
	W1202 12:57:31.222987   59162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:57:31.226661   59162 out.go:201] 
	W1202 12:57:31.227691   59162 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.227739   59162 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:57:31.227763   59162 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:57:31.229696   59162 out.go:201] 
	I1202 12:57:29.064892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:31.065451   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:33.564442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:36.064844   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:38.065020   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:40.565467   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:43.065021   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:45.065674   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:47.565692   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:50.064566   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:52.065673   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:54.563919   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:56.565832   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:59.064489   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:01.064627   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:03.066470   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:05.565311   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:07.565342   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:10.065050   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:12.565026   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:15.065113   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:17.065377   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:19.570428   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:22.065941   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:24.564883   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:27.064907   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:29.565025   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:31.565662   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:33.566049   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:36.064675   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:38.064820   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:40.065555   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:42.565304   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:44.566076   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:47.064538   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:49.064571   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:51.064914   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:53.065942   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:55.564490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:57.566484   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:00.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:02.065385   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:04.065541   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:06.065687   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:08.564349   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:11.064985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:13.065285   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:15.565546   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:17.569757   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:20.065490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:22.565206   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:25.065588   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:27.065818   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:29.066671   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:31.565998   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:34.064527   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:36.064698   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:38.065158   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:40.563432   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:42.571603   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:45.065725   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:47.565321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:50.065712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:52.564522   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:55.065989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:57.563712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:59.565908   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:02.065655   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:04.564520   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:07.065360   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:09.566223   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:12.065149   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:14.564989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:17.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:19.066069   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:21.066247   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:23.564474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:26.065294   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:28.563804   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:30.565317   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:32.565978   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:35.064896   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:37.065442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:39.065516   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:41.565297   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:44.064849   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:46.564956   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:49.065151   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:51.065892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:53.570359   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:56.064144   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:58.065042   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:00.065116   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:02.065474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:04.564036   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:06.564531   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:08.565018   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:10.565163   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:13.065421   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:15.065623   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:17.564985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:20.065093   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.065732   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.559325   61173 pod_ready.go:82] duration metric: took 4m0.000776679s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	E1202 13:01:22.559360   61173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 13:01:22.559393   61173 pod_ready.go:39] duration metric: took 4m12.534205059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:01:22.559419   61173 kubeadm.go:597] duration metric: took 4m20.488585813s to restartPrimaryControlPlane
	W1202 13:01:22.559474   61173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 13:01:22.559501   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 13:01:48.872503   61173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312974314s)
	I1202 13:01:48.872571   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:01:48.893337   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:01:48.921145   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:01:48.934577   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:01:48.934594   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 13:01:48.934639   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 13:01:48.956103   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:01:48.956162   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:01:48.967585   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 13:01:48.984040   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:01:48.984084   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:01:48.994049   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.003811   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:01:49.003859   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.013646   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 13:01:49.023003   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:01:49.023051   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 13:01:49.032678   61173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:01:49.196294   61173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 13:01:57.349437   61173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:01:57.349497   61173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:01:57.349571   61173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:01:57.349740   61173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:01:57.349882   61173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:01:57.349976   61173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:01:57.351474   61173 out.go:235]   - Generating certificates and keys ...
	I1202 13:01:57.351576   61173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:01:57.351634   61173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:01:57.351736   61173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 13:01:57.351842   61173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 13:01:57.351952   61173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 13:01:57.352035   61173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 13:01:57.352132   61173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 13:01:57.352202   61173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 13:01:57.352325   61173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 13:01:57.352439   61173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 13:01:57.352515   61173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 13:01:57.352608   61173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:01:57.352689   61173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:01:57.352775   61173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:01:57.352860   61173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:01:57.352962   61173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:01:57.353058   61173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:01:57.353172   61173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:01:57.353295   61173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 13:01:57.354669   61173 out.go:235]   - Booting up control plane ...
	I1202 13:01:57.354756   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 13:01:57.354829   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 13:01:57.354884   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 13:01:57.354984   61173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 13:01:57.355073   61173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 13:01:57.355127   61173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 13:01:57.355280   61173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 13:01:57.355435   61173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 13:01:57.355528   61173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.24354ms
	I1202 13:01:57.355641   61173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 13:01:57.355720   61173 kubeadm.go:310] [api-check] The API server is healthy after 5.002367533s
	I1202 13:01:57.355832   61173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 13:01:57.355945   61173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 13:01:57.356000   61173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 13:01:57.356175   61173 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 13:01:57.356246   61173 kubeadm.go:310] [bootstrap-token] Using token: 0oxhck.9gzdpio1kzs08rgi
	I1202 13:01:57.357582   61173 out.go:235]   - Configuring RBAC rules ...
	I1202 13:01:57.357692   61173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 13:01:57.357798   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 13:01:57.357973   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 13:01:57.358102   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 13:01:57.358246   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 13:01:57.358361   61173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 13:01:57.358460   61173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 13:01:57.358497   61173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 13:01:57.358547   61173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 13:01:57.358557   61173 kubeadm.go:310] 
	I1202 13:01:57.358615   61173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 13:01:57.358625   61173 kubeadm.go:310] 
	I1202 13:01:57.358691   61173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 13:01:57.358698   61173 kubeadm.go:310] 
	I1202 13:01:57.358730   61173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 13:01:57.358800   61173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 13:01:57.358878   61173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 13:01:57.358889   61173 kubeadm.go:310] 
	I1202 13:01:57.358954   61173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 13:01:57.358961   61173 kubeadm.go:310] 
	I1202 13:01:57.358999   61173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 13:01:57.359005   61173 kubeadm.go:310] 
	I1202 13:01:57.359047   61173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 13:01:57.359114   61173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 13:01:57.359179   61173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 13:01:57.359185   61173 kubeadm.go:310] 
	I1202 13:01:57.359271   61173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 13:01:57.359364   61173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 13:01:57.359377   61173 kubeadm.go:310] 
	I1202 13:01:57.359451   61173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359561   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 13:01:57.359581   61173 kubeadm.go:310] 	--control-plane 
	I1202 13:01:57.359587   61173 kubeadm.go:310] 
	I1202 13:01:57.359666   61173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 13:01:57.359678   61173 kubeadm.go:310] 
	I1202 13:01:57.359745   61173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359848   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 13:01:57.359874   61173 cni.go:84] Creating CNI manager for ""
	I1202 13:01:57.359887   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:01:57.361282   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 13:01:57.362319   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 13:01:57.373455   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 13:01:57.393003   61173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 13:01:57.393055   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:57.393136   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653783 minikube.k8s.io/updated_at=2024_12_02T13_01_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=default-k8s-diff-port-653783 minikube.k8s.io/primary=true
	I1202 13:01:57.426483   61173 ops.go:34] apiserver oom_adj: -16
	I1202 13:01:57.584458   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.084831   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.585450   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.084976   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.585068   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.085470   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.584722   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.084770   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.585414   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.725480   61173 kubeadm.go:1113] duration metric: took 4.332474868s to wait for elevateKubeSystemPrivileges
	I1202 13:02:01.725523   61173 kubeadm.go:394] duration metric: took 4m59.70293206s to StartCluster
	I1202 13:02:01.725545   61173 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.725633   61173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:02:01.730008   61173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.730438   61173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:02:01.730586   61173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 13:02:01.730685   61173 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730703   61173 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730707   61173 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730719   61173 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730734   61173 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730736   61173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653783"
	W1202 13:02:01.730746   61173 addons.go:243] addon metrics-server should already be in state true
	I1202 13:02:01.730776   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	W1202 13:02:01.730711   61173 addons.go:243] addon storage-provisioner should already be in state true
	I1202 13:02:01.730865   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.731186   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731204   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731215   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731220   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731235   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731255   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.730707   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:02:01.731895   61173 out.go:177] * Verifying Kubernetes components...
	I1202 13:02:01.733515   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:02:01.748534   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1202 13:02:01.749156   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.749717   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.749743   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.750167   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.750734   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.750771   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.750997   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I1202 13:02:01.751714   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I1202 13:02:01.751911   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752088   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752388   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.752406   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.752785   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753212   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.753240   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.753514   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.753527   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.753807   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753953   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.756554   61173 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653783"
	W1202 13:02:01.756567   61173 addons.go:243] addon default-storageclass should already be in state true
	I1202 13:02:01.756588   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.756803   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.756824   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.769388   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I1202 13:02:01.769867   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.770303   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.770328   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.770810   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.770984   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.771974   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1202 13:02:01.772430   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.773043   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.773068   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.773294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.773441   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.773707   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.775187   61173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 13:02:01.775514   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.776461   61173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:01.776482   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 13:02:01.776499   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.776562   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I1202 13:02:01.776927   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.777077   61173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 13:02:01.777497   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.777509   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.777795   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.778197   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 13:02:01.778215   61173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 13:02:01.778235   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.778284   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.778315   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.779324   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780389   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.780472   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780336   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.780832   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.780996   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.781101   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.781390   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781588   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.781608   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781737   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.781886   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.781973   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.782063   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.793947   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I1202 13:02:01.794298   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.794720   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.794737   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.795031   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.795200   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.796909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.797092   61173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:01.797104   61173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 13:02:01.797121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.799831   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800160   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.800191   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800416   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.800595   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.800702   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.800823   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.936668   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:02:01.954328   61173 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968409   61173 node_ready.go:49] node "default-k8s-diff-port-653783" has status "Ready":"True"
	I1202 13:02:01.968427   61173 node_ready.go:38] duration metric: took 14.066432ms for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968436   61173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:01.981818   61173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:02.071558   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 13:02:02.071590   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 13:02:02.076260   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:02.085318   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:02.098342   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 13:02:02.098363   61173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 13:02:02.156135   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.156165   61173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 13:02:02.175618   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.359810   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.359841   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360111   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360201   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.360179   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360225   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.360246   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360518   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360528   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360532   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.366246   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.366270   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.366633   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.366647   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.366660   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.134955   61173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049592704s)
	I1202 13:02:03.135040   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135059   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135084   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135114   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135342   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135392   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135413   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135432   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135533   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135565   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135584   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.136554   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136558   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136569   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136568   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:03.136572   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136579   61173 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:03.138071   61173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1202 13:02:03.139462   61173 addons.go:510] duration metric: took 1.408893663s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1202 13:02:03.986445   61173 pod_ready.go:93] pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:03.986471   61173 pod_ready.go:82] duration metric: took 2.0046319s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:03.986482   61173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.492973   61173 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:04.492995   61173 pod_ready.go:82] duration metric: took 506.506566ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.493004   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:06.500118   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.502468   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.999764   61173 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:08.999785   61173 pod_ready.go:82] duration metric: took 4.506775084s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:08.999795   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005354   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.005376   61173 pod_ready.go:82] duration metric: took 1.005574607s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005385   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010948   61173 pod_ready.go:93] pod "kube-proxy-d4vw4" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.010964   61173 pod_ready.go:82] duration metric: took 5.574069ms for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010972   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014901   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.014918   61173 pod_ready.go:82] duration metric: took 3.938654ms for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014927   61173 pod_ready.go:39] duration metric: took 8.046482137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:10.014943   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:02:10.014994   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:02:10.032401   61173 api_server.go:72] duration metric: took 8.301924942s to wait for apiserver process to appear ...
	I1202 13:02:10.032418   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:02:10.032436   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 13:02:10.036406   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 13:02:10.037035   61173 api_server.go:141] control plane version: v1.31.2
	I1202 13:02:10.037052   61173 api_server.go:131] duration metric: took 4.627223ms to wait for apiserver health ...
	I1202 13:02:10.037061   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:02:10.042707   61173 system_pods.go:59] 9 kube-system pods found
	I1202 13:02:10.042731   61173 system_pods.go:61] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.042740   61173 system_pods.go:61] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.042746   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.042752   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.042758   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.042762   61173 system_pods.go:61] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.042768   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.042776   61173 system_pods.go:61] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.042782   61173 system_pods.go:61] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.042794   61173 system_pods.go:74] duration metric: took 5.724009ms to wait for pod list to return data ...
	I1202 13:02:10.042800   61173 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:02:10.045407   61173 default_sa.go:45] found service account: "default"
	I1202 13:02:10.045422   61173 default_sa.go:55] duration metric: took 2.615305ms for default service account to be created ...
	I1202 13:02:10.045428   61173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:02:10.050473   61173 system_pods.go:86] 9 kube-system pods found
	I1202 13:02:10.050494   61173 system_pods.go:89] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.050499   61173 system_pods.go:89] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.050505   61173 system_pods.go:89] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.050510   61173 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.050514   61173 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.050518   61173 system_pods.go:89] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.050526   61173 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.050532   61173 system_pods.go:89] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.050540   61173 system_pods.go:89] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.050547   61173 system_pods.go:126] duration metric: took 5.115018ms to wait for k8s-apps to be running ...
	I1202 13:02:10.050552   61173 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:02:10.050588   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:02:10.065454   61173 system_svc.go:56] duration metric: took 14.89671ms WaitForService to wait for kubelet
	I1202 13:02:10.065475   61173 kubeadm.go:582] duration metric: took 8.335001135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:02:10.065490   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:02:10.199102   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:02:10.199123   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 13:02:10.199136   61173 node_conditions.go:105] duration metric: took 133.639645ms to run NodePressure ...
	I1202 13:02:10.199148   61173 start.go:241] waiting for startup goroutines ...
	I1202 13:02:10.199156   61173 start.go:246] waiting for cluster config update ...
	I1202 13:02:10.199167   61173 start.go:255] writing updated cluster config ...
	I1202 13:02:10.199421   61173 ssh_runner.go:195] Run: rm -f paused
	I1202 13:02:10.246194   61173 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:02:10.248146   61173 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653783" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.101581885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144794101554891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8ad3089-73d8-4438-ac41-c36e60a8607d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.102123515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00fe46ff-8373-4f82-a57b-ea5d0ab670b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.102197838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00fe46ff-8373-4f82-a57b-ea5d0ab670b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.102232589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=00fe46ff-8373-4f82-a57b-ea5d0ab670b4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.130973038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01d7e08c-866a-4d3b-bc98-472224f1a2f9 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.131043615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01d7e08c-866a-4d3b-bc98-472224f1a2f9 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.132652196Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19572e64-9a9d-4c29-b327-9fbfe7e42bce name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.133040433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144794133017682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19572e64-9a9d-4c29-b327-9fbfe7e42bce name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.133620550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4afe1191-37ee-4c37-854b-f96676bbc2ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.133694327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4afe1191-37ee-4c37-854b-f96676bbc2ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.133729001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4afe1191-37ee-4c37-854b-f96676bbc2ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.170292317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df99fbc0-9230-4d8c-9e62-ba6928f9a069 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.170407104Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df99fbc0-9230-4d8c-9e62-ba6928f9a069 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.171178154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20ab909b-2618-4095-93e7-2af8c9d9ed1a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.171631802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144794171604448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20ab909b-2618-4095-93e7-2af8c9d9ed1a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.172115022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac5406b5-3421-4707-b9b7-83cce9d6ac21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.172161040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac5406b5-3421-4707-b9b7-83cce9d6ac21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.172196842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ac5406b5-3421-4707-b9b7-83cce9d6ac21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.204310000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e3ee75e-9f53-4ee4-9492-77fd69ef229f name=/runtime.v1.RuntimeService/Version
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.204431623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e3ee75e-9f53-4ee4-9492-77fd69ef229f name=/runtime.v1.RuntimeService/Version
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.205730071Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c67ca058-d50c-41fa-a60b-3db4dc975768 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.206124485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144794206098351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c67ca058-d50c-41fa-a60b-3db4dc975768 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.206679033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfff2b3b-a2ec-4103-8643-c7b54f429840 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.206765706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfff2b3b-a2ec-4103-8643-c7b54f429840 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:06:34 old-k8s-version-666766 crio[628]: time="2024-12-02 13:06:34.206812691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bfff2b3b-a2ec-4103-8643-c7b54f429840 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 12:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056211] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.145598] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.034204] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.629273] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.854910] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.063738] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078588] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.173629] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.134990] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.253737] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.528775] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.061339] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.164841] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[ +11.104852] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 2 12:53] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Dec 2 12:55] systemd-fstab-generator[5352]: Ignoring "noauto" option for root device
	[  +0.070336] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:06:34 up 17 min,  0 users,  load average: 0.00, 0.03, 0.08
	Linux old-k8s-version-666766 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bfc660, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000a2b8c0, 0x24, 0x0, ...)
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: net.(*Dialer).DialContext(0xc0002ffec0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a2b8c0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00093bb00, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a2b8c0, 0x24, 0x60, 0x7f67c1d3e730, 0x118, ...)
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: net/http.(*Transport).dial(0xc00054e000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a2b8c0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: net/http.(*Transport).dialConn(0xc00054e000, 0x4f7fe00, 0xc000052030, 0x0, 0xc000b6e9c0, 0x5, 0xc000a2b8c0, 0x24, 0x0, 0xc000b58a20, ...)
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: net/http.(*Transport).dialConnFor(0xc00054e000, 0xc00099f550)
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]: created by net/http.(*Transport).queueForDial
	Dec 02 13:06:30 old-k8s-version-666766 kubelet[6526]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 02 13:06:30 old-k8s-version-666766 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 02 13:06:30 old-k8s-version-666766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 13:06:31 old-k8s-version-666766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 02 13:06:31 old-k8s-version-666766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 02 13:06:31 old-k8s-version-666766 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 02 13:06:31 old-k8s-version-666766 kubelet[6535]: I1202 13:06:31.710765    6535 server.go:416] Version: v1.20.0
	Dec 02 13:06:31 old-k8s-version-666766 kubelet[6535]: I1202 13:06:31.710978    6535 server.go:837] Client rotation is on, will bootstrap in background
	Dec 02 13:06:31 old-k8s-version-666766 kubelet[6535]: I1202 13:06:31.712824    6535 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 02 13:06:31 old-k8s-version-666766 kubelet[6535]: W1202 13:06:31.713757    6535 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 02 13:06:31 old-k8s-version-666766 kubelet[6535]: I1202 13:06:31.713917    6535 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (236.416596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-666766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.82s)
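The kubelet log quoted above shows kubelet.service on old-k8s-version-666766 in a restart loop (restart counter at 114) while the API server at localhost:8443 stays unreachable, which is why the post-mortem kubectl commands were skipped. A minimal sketch of how that loop could be inspected by hand, assuming the profile's VM is still running and its systemd journal is reachable over "minikube ssh" (neither is shown in this log):

	# hypothetical follow-up commands, not part of the recorded test run
	out/minikube-linux-amd64 -p old-k8s-version-666766 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-666766 ssh -- sudo journalctl -u kubelet --no-pager -n 50
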

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-02 13:11:10.83604911 +0000 UTC m=+6049.667834115
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
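The wait described above can be approximated by hand against the profile's kubectl context (minikube names the context after the profile, as the end of the start log confirms). A minimal sketch, assuming the dashboard addon actually created pods carrying the k8s-app=kubernetes-dashboard label:

	# hypothetical manual reproduction of the test's 9m0s readiness wait
	kubectl --context default-k8s-diff-port-653783 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-653783 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

If no matching pod becomes Ready within the timeout, kubectl wait exits non-zero, which corresponds to the failure the test reports here.
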
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-653783 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-653783 logs -n 25: (1.443768042s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC | 02 Dec 24 13:02 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 13:09 UTC | 02 Dec 24 13:09 UTC |
	| start   | -p auto-256954 --memory=3072                           | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:09 UTC | 02 Dec 24 13:10 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 13:09 UTC | 02 Dec 24 13:09 UTC |
	| start   | -p kindnet-256954                                      | kindnet-256954               | jenkins | v1.34.0 | 02 Dec 24 13:09 UTC | 02 Dec 24 13:11 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 pgrep -a                                | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:10 UTC | 02 Dec 24 13:10 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p kindnet-256954 pgrep -a                             | kindnet-256954               | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo cat                                | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/nsswitch.conf                                     |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo cat                                | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/hosts                                             |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo cat                                | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/resolv.conf                                       |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo crictl                             | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | pods                                                   |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo crictl ps                          | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | --all                                                  |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo find                               | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/cni -type f -exec sh -c                           |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo ip a s                             | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	| ssh     | -p auto-256954 sudo ip r s                             | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	| ssh     | -p auto-256954 sudo                                    | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | iptables-save                                          |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo iptables                           | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | -t nat -L -n -v                                        |                              |         |         |                     |                     |
	| ssh     | -p auto-256954 sudo systemctl                          | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | status kubelet --all --full                            |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 13:09:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 13:09:50.633242   66128 out.go:345] Setting OutFile to fd 1 ...
	I1202 13:09:50.633461   66128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:09:50.633470   66128 out.go:358] Setting ErrFile to fd 2...
	I1202 13:09:50.633475   66128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:09:50.633640   66128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 13:09:50.634190   66128 out.go:352] Setting JSON to false
	I1202 13:09:50.635186   66128 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6743,"bootTime":1733138248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 13:09:50.635275   66128 start.go:139] virtualization: kvm guest
	I1202 13:09:50.637243   66128 out.go:177] * [kindnet-256954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 13:09:50.638505   66128 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 13:09:50.638522   66128 notify.go:220] Checking for updates...
	I1202 13:09:50.640647   66128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 13:09:50.641990   66128 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:09:50.643325   66128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:09:50.644510   66128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 13:09:50.645640   66128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 13:09:50.647450   66128 config.go:182] Loaded profile config "auto-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:50.647597   66128 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:50.647715   66128 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:50.647831   66128 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 13:09:50.686134   66128 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 13:09:50.687178   66128 start.go:297] selected driver: kvm2
	I1202 13:09:50.687191   66128 start.go:901] validating driver "kvm2" against <nil>
	I1202 13:09:50.687201   66128 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 13:09:50.687847   66128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:09:50.687903   66128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 13:09:50.702592   66128 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 13:09:50.702633   66128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 13:09:50.702847   66128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:09:50.702872   66128 cni.go:84] Creating CNI manager for "kindnet"
	I1202 13:09:50.702878   66128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 13:09:50.702922   66128 start.go:340] cluster config:
	{Name:kindnet-256954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:09:50.703006   66128 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:09:50.704484   66128 out.go:177] * Starting "kindnet-256954" primary control-plane node in "kindnet-256954" cluster
	I1202 13:09:55.084006   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.084484   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has current primary IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.084503   65727 main.go:141] libmachine: (auto-256954) Found IP for machine: 192.168.50.47
	I1202 13:09:55.084512   65727 main.go:141] libmachine: (auto-256954) Reserving static IP address...
	I1202 13:09:55.084840   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find host DHCP lease matching {name: "auto-256954", mac: "52:54:00:e3:c5:a1", ip: "192.168.50.47"} in network mk-auto-256954
	I1202 13:09:55.157118   65727 main.go:141] libmachine: (auto-256954) DBG | Getting to WaitForSSH function...
	I1202 13:09:55.157149   65727 main.go:141] libmachine: (auto-256954) Reserved static IP address: 192.168.50.47
	I1202 13:09:55.157164   65727 main.go:141] libmachine: (auto-256954) Waiting for SSH to be available...
	I1202 13:09:55.160044   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.160474   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.160504   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.160639   65727 main.go:141] libmachine: (auto-256954) DBG | Using SSH client type: external
	I1202 13:09:55.160661   65727 main.go:141] libmachine: (auto-256954) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa (-rw-------)
	I1202 13:09:55.160691   65727 main.go:141] libmachine: (auto-256954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 13:09:55.160705   65727 main.go:141] libmachine: (auto-256954) DBG | About to run SSH command:
	I1202 13:09:55.160721   65727 main.go:141] libmachine: (auto-256954) DBG | exit 0
	I1202 13:09:55.284186   65727 main.go:141] libmachine: (auto-256954) DBG | SSH cmd err, output: <nil>: 
	I1202 13:09:55.284510   65727 main.go:141] libmachine: (auto-256954) KVM machine creation complete!
	I1202 13:09:55.284797   65727 main.go:141] libmachine: (auto-256954) Calling .GetConfigRaw
	I1202 13:09:55.285412   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:55.285568   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:55.285690   65727 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 13:09:55.285701   65727 main.go:141] libmachine: (auto-256954) Calling .GetState
	I1202 13:09:55.286928   65727 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 13:09:55.286941   65727 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 13:09:55.286946   65727 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 13:09:55.286951   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.289149   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.289460   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.289495   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.289590   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:55.289762   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.289895   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.289999   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:55.290140   65727 main.go:141] libmachine: Using SSH client type: native
	I1202 13:09:55.290355   65727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1202 13:09:55.290367   65727 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 13:09:50.705539   66128 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 13:09:50.705574   66128 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 13:09:50.705583   66128 cache.go:56] Caching tarball of preloaded images
	I1202 13:09:50.705667   66128 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 13:09:50.705680   66128 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 13:09:50.705759   66128 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/config.json ...
	I1202 13:09:50.705775   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/config.json: {Name:mk34c7a5c36d559c2a0372c083cf6c25712591db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:09:50.705913   66128 start.go:360] acquireMachinesLock for kindnet-256954: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 13:09:56.416968   66128 start.go:364] duration metric: took 5.711026435s to acquireMachinesLock for "kindnet-256954"
	I1202 13:09:56.417031   66128 start.go:93] Provisioning new machine with config: &{Name:kindnet-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:09:56.417158   66128 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 13:09:55.395162   65727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 13:09:55.395187   65727 main.go:141] libmachine: Detecting the provisioner...
	I1202 13:09:55.395199   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.397877   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.398238   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.398257   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.398408   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:55.398593   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.398737   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.398865   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:55.399012   65727 main.go:141] libmachine: Using SSH client type: native
	I1202 13:09:55.399174   65727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1202 13:09:55.399185   65727 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 13:09:55.500614   65727 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 13:09:55.500716   65727 main.go:141] libmachine: found compatible host: buildroot
	I1202 13:09:55.500729   65727 main.go:141] libmachine: Provisioning with buildroot...
	I1202 13:09:55.500740   65727 main.go:141] libmachine: (auto-256954) Calling .GetMachineName
	I1202 13:09:55.501008   65727 buildroot.go:166] provisioning hostname "auto-256954"
	I1202 13:09:55.501040   65727 main.go:141] libmachine: (auto-256954) Calling .GetMachineName
	I1202 13:09:55.501242   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.503600   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.503934   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.503976   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.504067   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:55.504211   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.504391   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.504501   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:55.504643   65727 main.go:141] libmachine: Using SSH client type: native
	I1202 13:09:55.504784   65727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1202 13:09:55.504796   65727 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-256954 && echo "auto-256954" | sudo tee /etc/hostname
	I1202 13:09:55.622227   65727 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-256954
	
	I1202 13:09:55.622253   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.625247   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.625595   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.625617   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.625824   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:55.625998   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.626170   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.626322   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:55.626482   65727 main.go:141] libmachine: Using SSH client type: native
	I1202 13:09:55.626683   65727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1202 13:09:55.626705   65727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-256954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-256954/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-256954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 13:09:55.738764   65727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 13:09:55.738789   65727 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 13:09:55.738846   65727 buildroot.go:174] setting up certificates
	I1202 13:09:55.738864   65727 provision.go:84] configureAuth start
	I1202 13:09:55.738884   65727 main.go:141] libmachine: (auto-256954) Calling .GetMachineName
	I1202 13:09:55.739187   65727 main.go:141] libmachine: (auto-256954) Calling .GetIP
	I1202 13:09:55.741429   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.741816   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.741841   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.742004   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.744178   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.744506   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.744531   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.744672   65727 provision.go:143] copyHostCerts
	I1202 13:09:55.744725   65727 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 13:09:55.744735   65727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 13:09:55.744806   65727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 13:09:55.744888   65727 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 13:09:55.744896   65727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 13:09:55.744919   65727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 13:09:55.744967   65727 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 13:09:55.744973   65727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 13:09:55.744993   65727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 13:09:55.745036   65727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.auto-256954 san=[127.0.0.1 192.168.50.47 auto-256954 localhost minikube]
	I1202 13:09:55.794660   65727 provision.go:177] copyRemoteCerts
	I1202 13:09:55.794708   65727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 13:09:55.794727   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.797161   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.797484   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.797507   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.797716   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:55.797878   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.798016   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:55.798113   65727 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa Username:docker}
	I1202 13:09:55.878531   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 13:09:55.903499   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1202 13:09:55.926590   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 13:09:55.949549   65727 provision.go:87] duration metric: took 210.671754ms to configureAuth
	I1202 13:09:55.949573   65727 buildroot.go:189] setting minikube options for container-runtime
	I1202 13:09:55.949757   65727 config.go:182] Loaded profile config "auto-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:55.949851   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:55.952334   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.952667   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:55.952697   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:55.952856   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:55.953018   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.953151   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:55.953261   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:55.953380   65727 main.go:141] libmachine: Using SSH client type: native
	I1202 13:09:55.953591   65727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1202 13:09:55.953615   65727 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 13:09:56.174972   65727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 13:09:56.175001   65727 main.go:141] libmachine: Checking connection to Docker...
	I1202 13:09:56.175011   65727 main.go:141] libmachine: (auto-256954) Calling .GetURL
	I1202 13:09:56.176203   65727 main.go:141] libmachine: (auto-256954) DBG | Using libvirt version 6000000
	I1202 13:09:56.178299   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.178609   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.178649   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.178784   65727 main.go:141] libmachine: Docker is up and running!
	I1202 13:09:56.178802   65727 main.go:141] libmachine: Reticulating splines...
	I1202 13:09:56.178811   65727 client.go:171] duration metric: took 25.760024364s to LocalClient.Create
	I1202 13:09:56.178839   65727 start.go:167] duration metric: took 25.76012112s to libmachine.API.Create "auto-256954"
	I1202 13:09:56.178854   65727 start.go:293] postStartSetup for "auto-256954" (driver="kvm2")
	I1202 13:09:56.178867   65727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 13:09:56.178890   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:56.179113   65727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 13:09:56.179135   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:56.181176   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.181454   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.181482   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.181604   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:56.181767   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:56.181909   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:56.182053   65727 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa Username:docker}
	I1202 13:09:56.262400   65727 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 13:09:56.266676   65727 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 13:09:56.266701   65727 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 13:09:56.266777   65727 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 13:09:56.266882   65727 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 13:09:56.267000   65727 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 13:09:56.276208   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 13:09:56.300477   65727 start.go:296] duration metric: took 121.61222ms for postStartSetup
	I1202 13:09:56.300521   65727 main.go:141] libmachine: (auto-256954) Calling .GetConfigRaw
	I1202 13:09:56.301141   65727 main.go:141] libmachine: (auto-256954) Calling .GetIP
	I1202 13:09:56.304079   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.304530   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.304556   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.304823   65727 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/config.json ...
	I1202 13:09:56.305098   65727 start.go:128] duration metric: took 25.904573927s to createHost
	I1202 13:09:56.305131   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:56.307515   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.307849   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.307871   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.307984   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:56.308138   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:56.308312   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:56.308464   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:56.308644   65727 main.go:141] libmachine: Using SSH client type: native
	I1202 13:09:56.308875   65727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I1202 13:09:56.308892   65727 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 13:09:56.416839   65727 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733144996.386302485
	
	I1202 13:09:56.416858   65727 fix.go:216] guest clock: 1733144996.386302485
	I1202 13:09:56.416874   65727 fix.go:229] Guest: 2024-12-02 13:09:56.386302485 +0000 UTC Remote: 2024-12-02 13:09:56.305116435 +0000 UTC m=+26.013712022 (delta=81.18605ms)
	I1202 13:09:56.416891   65727 fix.go:200] guest clock delta is within tolerance: 81.18605ms
	I1202 13:09:56.416896   65727 start.go:83] releasing machines lock for "auto-256954", held for 26.016492442s
	I1202 13:09:56.416922   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:56.417163   65727 main.go:141] libmachine: (auto-256954) Calling .GetIP
	I1202 13:09:56.419867   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.420210   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.420259   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.420411   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:56.420886   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:56.421054   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:56.421181   65727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 13:09:56.421223   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:56.421253   65727 ssh_runner.go:195] Run: cat /version.json
	I1202 13:09:56.421273   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:09:56.424089   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.424350   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.424476   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.424498   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.424643   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:56.424750   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:56.424782   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:56.424793   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:56.424948   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:56.424969   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:09:56.425087   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:09:56.425088   65727 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa Username:docker}
	I1202 13:09:56.425202   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:09:56.425339   65727 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa Username:docker}
	I1202 13:09:56.523776   65727 ssh_runner.go:195] Run: systemctl --version
	I1202 13:09:56.532622   65727 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 13:09:56.707538   65727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 13:09:56.714736   65727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 13:09:56.714799   65727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 13:09:56.733479   65727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 13:09:56.733513   65727 start.go:495] detecting cgroup driver to use...
	I1202 13:09:56.733578   65727 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 13:09:56.750907   65727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 13:09:56.766395   65727 docker.go:217] disabling cri-docker service (if available) ...
	I1202 13:09:56.766457   65727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 13:09:56.782214   65727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 13:09:56.797735   65727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 13:09:56.919427   65727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 13:09:57.075619   65727 docker.go:233] disabling docker service ...
	I1202 13:09:57.075693   65727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 13:09:57.091077   65727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 13:09:57.103563   65727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 13:09:57.243889   65727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 13:09:57.381723   65727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 13:09:57.396663   65727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 13:09:57.414725   65727 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 13:09:57.414783   65727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.425051   65727 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 13:09:57.425099   65727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.435226   65727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.445289   65727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.455331   65727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 13:09:57.465682   65727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.476712   65727 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.495674   65727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:09:57.505631   65727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 13:09:57.515301   65727 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 13:09:57.515353   65727 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 13:09:57.529365   65727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 13:09:57.538744   65727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:09:57.661053   65727 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 13:09:57.769066   65727 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 13:09:57.769131   65727 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 13:09:57.774967   65727 start.go:563] Will wait 60s for crictl version
	I1202 13:09:57.775022   65727 ssh_runner.go:195] Run: which crictl
	I1202 13:09:57.780036   65727 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 13:09:57.829993   65727 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 13:09:57.830077   65727 ssh_runner.go:195] Run: crio --version
	I1202 13:09:57.864833   65727 ssh_runner.go:195] Run: crio --version
	I1202 13:09:57.903571   65727 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 13:09:57.904824   65727 main.go:141] libmachine: (auto-256954) Calling .GetIP
	I1202 13:09:57.908045   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:57.908419   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:09:57.908443   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:57.908684   65727 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1202 13:09:57.913078   65727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 13:09:57.927657   65727 kubeadm.go:883] updating cluster {Name:auto-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 13:09:57.927773   65727 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 13:09:57.927838   65727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 13:09:57.961816   65727 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 13:09:57.961871   65727 ssh_runner.go:195] Run: which lz4
	I1202 13:09:57.966224   65727 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 13:09:57.970962   65727 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 13:09:57.970989   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 13:09:59.451060   65727 crio.go:462] duration metric: took 1.484862355s to copy over tarball
	I1202 13:09:59.451152   65727 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 13:09:56.419206   66128 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1202 13:09:56.419404   66128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:09:56.419448   66128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:09:56.436689   66128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1202 13:09:56.437102   66128 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:09:56.437655   66128 main.go:141] libmachine: Using API Version  1
	I1202 13:09:56.437676   66128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:09:56.438028   66128 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:09:56.438199   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetMachineName
	I1202 13:09:56.438351   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:09:56.438516   66128 start.go:159] libmachine.API.Create for "kindnet-256954" (driver="kvm2")
	I1202 13:09:56.438548   66128 client.go:168] LocalClient.Create starting
	I1202 13:09:56.438595   66128 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 13:09:56.438631   66128 main.go:141] libmachine: Decoding PEM data...
	I1202 13:09:56.438652   66128 main.go:141] libmachine: Parsing certificate...
	I1202 13:09:56.438713   66128 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 13:09:56.438739   66128 main.go:141] libmachine: Decoding PEM data...
	I1202 13:09:56.438759   66128 main.go:141] libmachine: Parsing certificate...
	I1202 13:09:56.438785   66128 main.go:141] libmachine: Running pre-create checks...
	I1202 13:09:56.438796   66128 main.go:141] libmachine: (kindnet-256954) Calling .PreCreateCheck
	I1202 13:09:56.439131   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetConfigRaw
	I1202 13:09:56.439555   66128 main.go:141] libmachine: Creating machine...
	I1202 13:09:56.439568   66128 main.go:141] libmachine: (kindnet-256954) Calling .Create
	I1202 13:09:56.439698   66128 main.go:141] libmachine: (kindnet-256954) Creating KVM machine...
	I1202 13:09:56.440756   66128 main.go:141] libmachine: (kindnet-256954) DBG | found existing default KVM network
	I1202 13:09:56.441930   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:56.441783   66194 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:4f:fe} reservation:<nil>}
	I1202 13:09:56.443235   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:56.443154   66194 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:4e:c9:c2} reservation:<nil>}
	I1202 13:09:56.444332   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:56.444252   66194 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030a9c0}
	I1202 13:09:56.444364   66128 main.go:141] libmachine: (kindnet-256954) DBG | created network xml: 
	I1202 13:09:56.444383   66128 main.go:141] libmachine: (kindnet-256954) DBG | <network>
	I1202 13:09:56.444396   66128 main.go:141] libmachine: (kindnet-256954) DBG |   <name>mk-kindnet-256954</name>
	I1202 13:09:56.444403   66128 main.go:141] libmachine: (kindnet-256954) DBG |   <dns enable='no'/>
	I1202 13:09:56.444416   66128 main.go:141] libmachine: (kindnet-256954) DBG |   
	I1202 13:09:56.444428   66128 main.go:141] libmachine: (kindnet-256954) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1202 13:09:56.444436   66128 main.go:141] libmachine: (kindnet-256954) DBG |     <dhcp>
	I1202 13:09:56.444449   66128 main.go:141] libmachine: (kindnet-256954) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1202 13:09:56.444460   66128 main.go:141] libmachine: (kindnet-256954) DBG |     </dhcp>
	I1202 13:09:56.444470   66128 main.go:141] libmachine: (kindnet-256954) DBG |   </ip>
	I1202 13:09:56.444481   66128 main.go:141] libmachine: (kindnet-256954) DBG |   
	I1202 13:09:56.444493   66128 main.go:141] libmachine: (kindnet-256954) DBG | </network>
	I1202 13:09:56.444506   66128 main.go:141] libmachine: (kindnet-256954) DBG | 
	I1202 13:09:56.449111   66128 main.go:141] libmachine: (kindnet-256954) DBG | trying to create private KVM network mk-kindnet-256954 192.168.61.0/24...
	I1202 13:09:56.518266   66128 main.go:141] libmachine: (kindnet-256954) DBG | private KVM network mk-kindnet-256954 192.168.61.0/24 created
	I1202 13:09:56.518308   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:56.518253   66194 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:09:56.518321   66128 main.go:141] libmachine: (kindnet-256954) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954 ...
	I1202 13:09:56.518352   66128 main.go:141] libmachine: (kindnet-256954) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 13:09:56.518461   66128 main.go:141] libmachine: (kindnet-256954) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 13:09:56.784871   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:56.784760   66194 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa...
	I1202 13:09:57.031221   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:57.031079   66194 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/kindnet-256954.rawdisk...
	I1202 13:09:57.031248   66128 main.go:141] libmachine: (kindnet-256954) DBG | Writing magic tar header
	I1202 13:09:57.031273   66128 main.go:141] libmachine: (kindnet-256954) DBG | Writing SSH key tar header
	I1202 13:09:57.031290   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:57.031197   66194 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954 ...
	I1202 13:09:57.031306   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954
	I1202 13:09:57.031318   66128 main.go:141] libmachine: (kindnet-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954 (perms=drwx------)
	I1202 13:09:57.031328   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 13:09:57.031342   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:09:57.031354   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 13:09:57.031366   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 13:09:57.031378   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home/jenkins
	I1202 13:09:57.031390   66128 main.go:141] libmachine: (kindnet-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 13:09:57.031423   66128 main.go:141] libmachine: (kindnet-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 13:09:57.031447   66128 main.go:141] libmachine: (kindnet-256954) DBG | Checking permissions on dir: /home
	I1202 13:09:57.031454   66128 main.go:141] libmachine: (kindnet-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 13:09:57.031468   66128 main.go:141] libmachine: (kindnet-256954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 13:09:57.031479   66128 main.go:141] libmachine: (kindnet-256954) DBG | Skipping /home - not owner
	I1202 13:09:57.031492   66128 main.go:141] libmachine: (kindnet-256954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 13:09:57.031521   66128 main.go:141] libmachine: (kindnet-256954) Creating domain...
	I1202 13:09:57.032622   66128 main.go:141] libmachine: (kindnet-256954) define libvirt domain using xml: 
	I1202 13:09:57.032648   66128 main.go:141] libmachine: (kindnet-256954) <domain type='kvm'>
	I1202 13:09:57.032659   66128 main.go:141] libmachine: (kindnet-256954)   <name>kindnet-256954</name>
	I1202 13:09:57.032670   66128 main.go:141] libmachine: (kindnet-256954)   <memory unit='MiB'>3072</memory>
	I1202 13:09:57.032679   66128 main.go:141] libmachine: (kindnet-256954)   <vcpu>2</vcpu>
	I1202 13:09:57.032689   66128 main.go:141] libmachine: (kindnet-256954)   <features>
	I1202 13:09:57.032700   66128 main.go:141] libmachine: (kindnet-256954)     <acpi/>
	I1202 13:09:57.032714   66128 main.go:141] libmachine: (kindnet-256954)     <apic/>
	I1202 13:09:57.032723   66128 main.go:141] libmachine: (kindnet-256954)     <pae/>
	I1202 13:09:57.032732   66128 main.go:141] libmachine: (kindnet-256954)     
	I1202 13:09:57.032738   66128 main.go:141] libmachine: (kindnet-256954)   </features>
	I1202 13:09:57.032749   66128 main.go:141] libmachine: (kindnet-256954)   <cpu mode='host-passthrough'>
	I1202 13:09:57.032757   66128 main.go:141] libmachine: (kindnet-256954)   
	I1202 13:09:57.032768   66128 main.go:141] libmachine: (kindnet-256954)   </cpu>
	I1202 13:09:57.032776   66128 main.go:141] libmachine: (kindnet-256954)   <os>
	I1202 13:09:57.032787   66128 main.go:141] libmachine: (kindnet-256954)     <type>hvm</type>
	I1202 13:09:57.032798   66128 main.go:141] libmachine: (kindnet-256954)     <boot dev='cdrom'/>
	I1202 13:09:57.032807   66128 main.go:141] libmachine: (kindnet-256954)     <boot dev='hd'/>
	I1202 13:09:57.032819   66128 main.go:141] libmachine: (kindnet-256954)     <bootmenu enable='no'/>
	I1202 13:09:57.032832   66128 main.go:141] libmachine: (kindnet-256954)   </os>
	I1202 13:09:57.032841   66128 main.go:141] libmachine: (kindnet-256954)   <devices>
	I1202 13:09:57.032846   66128 main.go:141] libmachine: (kindnet-256954)     <disk type='file' device='cdrom'>
	I1202 13:09:57.032865   66128 main.go:141] libmachine: (kindnet-256954)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/boot2docker.iso'/>
	I1202 13:09:57.032878   66128 main.go:141] libmachine: (kindnet-256954)       <target dev='hdc' bus='scsi'/>
	I1202 13:09:57.032886   66128 main.go:141] libmachine: (kindnet-256954)       <readonly/>
	I1202 13:09:57.032896   66128 main.go:141] libmachine: (kindnet-256954)     </disk>
	I1202 13:09:57.032930   66128 main.go:141] libmachine: (kindnet-256954)     <disk type='file' device='disk'>
	I1202 13:09:57.032951   66128 main.go:141] libmachine: (kindnet-256954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 13:09:57.032965   66128 main.go:141] libmachine: (kindnet-256954)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/kindnet-256954.rawdisk'/>
	I1202 13:09:57.032977   66128 main.go:141] libmachine: (kindnet-256954)       <target dev='hda' bus='virtio'/>
	I1202 13:09:57.032986   66128 main.go:141] libmachine: (kindnet-256954)     </disk>
	I1202 13:09:57.033001   66128 main.go:141] libmachine: (kindnet-256954)     <interface type='network'>
	I1202 13:09:57.033010   66128 main.go:141] libmachine: (kindnet-256954)       <source network='mk-kindnet-256954'/>
	I1202 13:09:57.033016   66128 main.go:141] libmachine: (kindnet-256954)       <model type='virtio'/>
	I1202 13:09:57.033024   66128 main.go:141] libmachine: (kindnet-256954)     </interface>
	I1202 13:09:57.033031   66128 main.go:141] libmachine: (kindnet-256954)     <interface type='network'>
	I1202 13:09:57.033041   66128 main.go:141] libmachine: (kindnet-256954)       <source network='default'/>
	I1202 13:09:57.033050   66128 main.go:141] libmachine: (kindnet-256954)       <model type='virtio'/>
	I1202 13:09:57.033058   66128 main.go:141] libmachine: (kindnet-256954)     </interface>
	I1202 13:09:57.033067   66128 main.go:141] libmachine: (kindnet-256954)     <serial type='pty'>
	I1202 13:09:57.033085   66128 main.go:141] libmachine: (kindnet-256954)       <target port='0'/>
	I1202 13:09:57.033097   66128 main.go:141] libmachine: (kindnet-256954)     </serial>
	I1202 13:09:57.033106   66128 main.go:141] libmachine: (kindnet-256954)     <console type='pty'>
	I1202 13:09:57.033120   66128 main.go:141] libmachine: (kindnet-256954)       <target type='serial' port='0'/>
	I1202 13:09:57.033131   66128 main.go:141] libmachine: (kindnet-256954)     </console>
	I1202 13:09:57.033140   66128 main.go:141] libmachine: (kindnet-256954)     <rng model='virtio'>
	I1202 13:09:57.033154   66128 main.go:141] libmachine: (kindnet-256954)       <backend model='random'>/dev/random</backend>
	I1202 13:09:57.033173   66128 main.go:141] libmachine: (kindnet-256954)     </rng>
	I1202 13:09:57.033185   66128 main.go:141] libmachine: (kindnet-256954)     
	I1202 13:09:57.033197   66128 main.go:141] libmachine: (kindnet-256954)     
	I1202 13:09:57.033208   66128 main.go:141] libmachine: (kindnet-256954)   </devices>
	I1202 13:09:57.033217   66128 main.go:141] libmachine: (kindnet-256954) </domain>
	I1202 13:09:57.033228   66128 main.go:141] libmachine: (kindnet-256954) 
	I1202 13:09:57.037102   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:4a:a6:42 in network default
	I1202 13:09:57.037651   66128 main.go:141] libmachine: (kindnet-256954) Ensuring networks are active...
	I1202 13:09:57.037669   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:09:57.038229   66128 main.go:141] libmachine: (kindnet-256954) Ensuring network default is active
	I1202 13:09:57.038549   66128 main.go:141] libmachine: (kindnet-256954) Ensuring network mk-kindnet-256954 is active
	I1202 13:09:57.039095   66128 main.go:141] libmachine: (kindnet-256954) Getting domain xml...
	I1202 13:09:57.039779   66128 main.go:141] libmachine: (kindnet-256954) Creating domain...
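Once the domain is defined and started, the same machine can be inspected out-of-band with virsh against the qemu:///system URI used by the driver, for example to dump the domain XML or watch for the DHCP lease the retry loop below is waiting on (illustrative only; not commands from this run):

    virsh -c qemu:///system dumpxml kindnet-256954
    virsh -c qemu:///system net-dhcp-leases mk-kindnet-256954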
	I1202 13:09:58.403698   66128 main.go:141] libmachine: (kindnet-256954) Waiting to get IP...
	I1202 13:09:58.404717   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:09:58.405319   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:09:58.405377   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:58.405311   66194 retry.go:31] will retry after 196.380603ms: waiting for machine to come up
	I1202 13:09:58.604030   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:09:58.604606   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:09:58.604634   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:58.604557   66194 retry.go:31] will retry after 316.252562ms: waiting for machine to come up
	I1202 13:09:58.922115   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:09:58.922665   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:09:58.922696   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:58.922594   66194 retry.go:31] will retry after 293.249145ms: waiting for machine to come up
	I1202 13:09:59.217150   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:09:59.217728   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:09:59.217759   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:59.217681   66194 retry.go:31] will retry after 487.312819ms: waiting for machine to come up
	I1202 13:09:59.706291   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:09:59.706889   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:09:59.706920   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:09:59.706838   66194 retry.go:31] will retry after 660.608321ms: waiting for machine to come up
	I1202 13:10:00.369490   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:00.369994   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:00.370023   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:00.369961   66194 retry.go:31] will retry after 683.26431ms: waiting for machine to come up
	I1202 13:10:01.814524   65727 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.363342976s)
	I1202 13:10:01.814550   65727 crio.go:469] duration metric: took 2.363459589s to extract the tarball
	I1202 13:10:01.814558   65727 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 13:10:01.861456   65727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 13:10:01.904409   65727 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 13:10:01.904437   65727 cache_images.go:84] Images are preloaded, skipping loading
	I1202 13:10:01.904445   65727 kubeadm.go:934] updating node { 192.168.50.47 8443 v1.31.2 crio true true} ...
	I1202 13:10:01.904533   65727 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-256954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:auto-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 13:10:01.904598   65727 ssh_runner.go:195] Run: crio config
	I1202 13:10:01.953151   65727 cni.go:84] Creating CNI manager for ""
	I1202 13:10:01.953175   65727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:10:01.953184   65727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 13:10:01.953203   65727 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-256954 NodeName:auto-256954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 13:10:01.953324   65727 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-256954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.47"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 13:10:01.953381   65727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 13:10:01.965147   65727 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 13:10:01.965219   65727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 13:10:01.976266   65727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1202 13:10:01.994278   65727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 13:10:02.011903   65727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2288 bytes)
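The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init further down; if it ever needs a manual sanity check, kubeadm can parse it without committing anything via its dry-run mode (illustrative only; the test itself runs the real init below):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run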
	I1202 13:10:02.030001   65727 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I1202 13:10:02.033854   65727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 13:10:02.046407   65727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:10:02.173862   65727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:10:02.192363   65727 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954 for IP: 192.168.50.47
	I1202 13:10:02.192390   65727 certs.go:194] generating shared ca certs ...
	I1202 13:10:02.192411   65727 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.192600   65727 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 13:10:02.192658   65727 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 13:10:02.192673   65727 certs.go:256] generating profile certs ...
	I1202 13:10:02.192755   65727 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/client.key
	I1202 13:10:02.192790   65727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/client.crt with IP's: []
	I1202 13:10:02.491980   65727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/client.crt ...
	I1202 13:10:02.492005   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/client.crt: {Name:mkce402251ca4dcf197ba238a7ec49e4fb2b58be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.492188   65727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/client.key ...
	I1202 13:10:02.492202   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/client.key: {Name:mkf2149a6769cc2f5fa9a7ce2a0dfb673d08b33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.492328   65727 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.key.94c64f5b
	I1202 13:10:02.492347   65727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.crt.94c64f5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.47]
	I1202 13:10:02.602397   65727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.crt.94c64f5b ...
	I1202 13:10:02.602430   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.crt.94c64f5b: {Name:mk7f1af0391c2cb9178cc79ddf8f0ddda5391083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.602624   65727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.key.94c64f5b ...
	I1202 13:10:02.602642   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.key.94c64f5b: {Name:mk1b7c44503e287c54eddfa93794b70e9896a923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.602755   65727 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.crt.94c64f5b -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.crt
	I1202 13:10:02.602852   65727 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.key.94c64f5b -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.key
	I1202 13:10:02.602930   65727 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.key
	I1202 13:10:02.602955   65727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.crt with IP's: []
	I1202 13:10:02.689042   65727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.crt ...
	I1202 13:10:02.689073   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.crt: {Name:mkd53d4cfc431db637aca7d9ea1e74e65d945f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.689256   65727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.key ...
	I1202 13:10:02.689270   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.key: {Name:mkc16ea29eb38e9c255b680f2d1ecae8c922b6ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:02.689478   65727 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 13:10:02.689519   65727 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 13:10:02.689531   65727 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 13:10:02.689559   65727 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 13:10:02.689585   65727 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 13:10:02.689612   65727 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 13:10:02.689667   65727 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 13:10:02.690260   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 13:10:02.715402   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 13:10:02.741760   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 13:10:02.766307   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 13:10:02.798833   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1202 13:10:02.828350   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 13:10:02.858134   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 13:10:02.886363   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 13:10:02.909571   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 13:10:02.944561   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 13:10:02.970127   65727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 13:10:02.997521   65727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
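With the profile certificates now under /var/lib/minikube/certs, the SANs baked into the apiserver cert (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.47 per the generation step above) could be confirmed directly with openssl (illustrative only; not a command from this run):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'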
	I1202 13:10:03.021047   65727 ssh_runner.go:195] Run: openssl version
	I1202 13:10:03.028800   65727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 13:10:03.040735   65727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 13:10:03.045413   65727 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 13:10:03.045472   65727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 13:10:03.051334   65727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 13:10:03.061900   65727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 13:10:03.072505   65727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 13:10:03.077016   65727 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 13:10:03.077069   65727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 13:10:03.083051   65727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 13:10:03.094080   65727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 13:10:03.104801   65727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:10:03.109905   65727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:10:03.109959   65727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:10:03.116032   65727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
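The openssl -hash / ln -fs pairs above follow the standard OpenSSL trust-directory convention: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the hash of its subject, so the b5213941.0 link created here resolves to minikubeCA.pem. The lookup can be reproduced manually (illustrative only):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0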
	I1202 13:10:03.131662   65727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 13:10:03.137127   65727 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 13:10:03.137179   65727 kubeadm.go:392] StartCluster: {Name:auto-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:10:03.137246   65727 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 13:10:03.137280   65727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 13:10:03.174709   65727 cri.go:89] found id: ""
	I1202 13:10:03.174802   65727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 13:10:03.188051   65727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:10:03.201687   65727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:10:03.215094   65727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:10:03.215110   65727 kubeadm.go:157] found existing configuration files:
	
	I1202 13:10:03.215145   65727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 13:10:03.224360   65727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:10:03.224413   65727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:10:03.234819   65727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 13:10:03.246555   65727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:10:03.246606   65727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:10:03.260287   65727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 13:10:03.270954   65727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:10:03.271007   65727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:10:03.282538   65727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 13:10:03.291485   65727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:10:03.291527   65727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 13:10:03.301186   65727 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:10:03.495029   65727 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 13:10:01.054509   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:01.054891   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:01.054917   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:01.054858   66194 retry.go:31] will retry after 1.0592382s: waiting for machine to come up
	I1202 13:10:02.115720   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:02.116182   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:02.116211   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:02.116119   66194 retry.go:31] will retry after 1.10738439s: waiting for machine to come up
	I1202 13:10:03.225429   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:03.225936   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:03.225965   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:03.225889   66194 retry.go:31] will retry after 1.810657381s: waiting for machine to come up
	I1202 13:10:05.037937   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:05.038435   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:05.038468   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:05.038390   66194 retry.go:31] will retry after 1.757123147s: waiting for machine to come up
	I1202 13:10:06.796967   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:06.797506   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:06.797538   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:06.797442   66194 retry.go:31] will retry after 2.732535593s: waiting for machine to come up
	I1202 13:10:09.531608   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:09.532240   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:09.532270   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:09.532173   66194 retry.go:31] will retry after 2.589437908s: waiting for machine to come up
	I1202 13:10:13.527930   65727 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:10:13.528027   65727 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:10:13.528131   65727 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:10:13.528293   65727 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:10:13.528451   65727 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:10:13.528571   65727 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:10:13.530062   65727 out.go:235]   - Generating certificates and keys ...
	I1202 13:10:13.530165   65727 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:10:13.530252   65727 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:10:13.530355   65727 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 13:10:13.530430   65727 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 13:10:13.530521   65727 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 13:10:13.530610   65727 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 13:10:13.530674   65727 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 13:10:13.530813   65727 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-256954 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I1202 13:10:13.530892   65727 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 13:10:13.531040   65727 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-256954 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I1202 13:10:13.531132   65727 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 13:10:13.531229   65727 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 13:10:13.531294   65727 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 13:10:13.531355   65727 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:10:13.531406   65727 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:10:13.531455   65727 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:10:13.531500   65727 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:10:13.531554   65727 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:10:13.531601   65727 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:10:13.531669   65727 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:10:13.531747   65727 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 13:10:13.533418   65727 out.go:235]   - Booting up control plane ...
	I1202 13:10:13.533496   65727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 13:10:13.533575   65727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 13:10:13.533649   65727 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 13:10:13.533774   65727 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 13:10:13.533860   65727 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 13:10:13.533920   65727 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 13:10:13.534065   65727 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 13:10:13.534164   65727 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 13:10:13.534230   65727 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001974335s
	I1202 13:10:13.534311   65727 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 13:10:13.534367   65727 kubeadm.go:310] [api-check] The API server is healthy after 5.001974165s
	I1202 13:10:13.534463   65727 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 13:10:13.534574   65727 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 13:10:13.534627   65727 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 13:10:13.534785   65727 kubeadm.go:310] [mark-control-plane] Marking the node auto-256954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 13:10:13.534834   65727 kubeadm.go:310] [bootstrap-token] Using token: j3qpi9.sf02yu3m82qa6ljl
	I1202 13:10:13.536017   65727 out.go:235]   - Configuring RBAC rules ...
	I1202 13:10:13.536136   65727 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 13:10:13.536265   65727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 13:10:13.536440   65727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 13:10:13.536614   65727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 13:10:13.536773   65727 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 13:10:13.536898   65727 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 13:10:13.537060   65727 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 13:10:13.537123   65727 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 13:10:13.537172   65727 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 13:10:13.537178   65727 kubeadm.go:310] 
	I1202 13:10:13.537242   65727 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 13:10:13.537252   65727 kubeadm.go:310] 
	I1202 13:10:13.537342   65727 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 13:10:13.537352   65727 kubeadm.go:310] 
	I1202 13:10:13.537377   65727 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 13:10:13.537430   65727 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 13:10:13.537473   65727 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 13:10:13.537479   65727 kubeadm.go:310] 
	I1202 13:10:13.537530   65727 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 13:10:13.537538   65727 kubeadm.go:310] 
	I1202 13:10:13.537582   65727 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 13:10:13.537588   65727 kubeadm.go:310] 
	I1202 13:10:13.537630   65727 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 13:10:13.537736   65727 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 13:10:13.537829   65727 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 13:10:13.537843   65727 kubeadm.go:310] 
	I1202 13:10:13.537968   65727 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 13:10:13.538075   65727 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 13:10:13.538083   65727 kubeadm.go:310] 
	I1202 13:10:13.538154   65727 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j3qpi9.sf02yu3m82qa6ljl \
	I1202 13:10:13.538245   65727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 13:10:13.538273   65727 kubeadm.go:310] 	--control-plane 
	I1202 13:10:13.538284   65727 kubeadm.go:310] 
	I1202 13:10:13.538400   65727 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 13:10:13.538413   65727 kubeadm.go:310] 
	I1202 13:10:13.538528   65727 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j3qpi9.sf02yu3m82qa6ljl \
	I1202 13:10:13.538682   65727 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
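The join commands printed above carry a short-lived bootstrap token plus the SHA-256 hash of the cluster CA public key. As a minimal sketch (not something the test runs), that hash can be recomputed on the control-plane node with openssl, assuming the default kubeadm CA path:
	# Recompute the --discovery-token-ca-cert-hash value (standard kubeadm procedure)
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'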
	I1202 13:10:13.538703   65727 cni.go:84] Creating CNI manager for ""
	I1202 13:10:13.538715   65727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:10:13.540029   65727 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 13:10:13.541184   65727 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 13:10:13.553921   65727 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
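The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. Purely as an illustration of the general shape of such a bridge conflist (every value below is an assumption, not the tested content), it pairs the bridge plugin with host-local IPAM and a portmap plugin:
	# Illustrative only; written to a scratch path, not the real CNI directory
	cat > /tmp/bridge-example.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
	# The file the test actually installed can be inspected in the guest with:
	sudo cat /etc/cni/net.d/1-k8s.conflist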
	I1202 13:10:13.573514   65727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 13:10:13.573557   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:13.573640   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-256954 minikube.k8s.io/updated_at=2024_12_02T13_10_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=auto-256954 minikube.k8s.io/primary=true
	I1202 13:10:13.731818   65727 ops.go:34] apiserver oom_adj: -16
	I1202 13:10:13.731941   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:14.233037   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:14.732866   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:15.232377   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:12.122871   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:12.123400   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:12.123420   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:12.123354   66194 retry.go:31] will retry after 4.087998504s: waiting for machine to come up
	I1202 13:10:15.732159   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:16.232678   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:16.732255   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:17.232291   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:17.732328   65727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:17.848700   65727 kubeadm.go:1113] duration metric: took 4.275186795s to wait for elevateKubeSystemPrivileges
	I1202 13:10:17.848734   65727 kubeadm.go:394] duration metric: took 14.711557871s to StartCluster
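The burst of "kubectl get sa default" calls between 13:10:13 and 13:10:17 is minikube polling until the default ServiceAccount exists, after it bound cluster-admin to kube-system:default via the minikube-rbac ClusterRoleBinding created at 13:10:13. A quick manual spot check, as a sketch using the names from the log, would be:
	kubectl --context auto-256954 get clusterrolebinding minikube-rbac
	kubectl --context auto-256954 -n default get serviceaccount default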
	I1202 13:10:17.848767   65727 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:17.848839   65727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:10:17.851112   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:17.851362   65727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 13:10:17.851364   65727 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:10:17.851410   65727 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 13:10:17.851526   65727 addons.go:69] Setting storage-provisioner=true in profile "auto-256954"
	I1202 13:10:17.851569   65727 addons.go:234] Setting addon storage-provisioner=true in "auto-256954"
	I1202 13:10:17.851579   65727 addons.go:69] Setting default-storageclass=true in profile "auto-256954"
	I1202 13:10:17.851585   65727 config.go:182] Loaded profile config "auto-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:10:17.851598   65727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-256954"
	I1202 13:10:17.851607   65727 host.go:66] Checking if "auto-256954" exists ...
	I1202 13:10:17.852014   65727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:17.852065   65727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:17.852146   65727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:17.852180   65727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:17.852682   65727 out.go:177] * Verifying Kubernetes components...
	I1202 13:10:17.854371   65727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:10:17.867170   65727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I1202 13:10:17.867215   65727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1202 13:10:17.867562   65727 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:17.867613   65727 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:17.868140   65727 main.go:141] libmachine: Using API Version  1
	I1202 13:10:17.868155   65727 main.go:141] libmachine: Using API Version  1
	I1202 13:10:17.868165   65727 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:17.868172   65727 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:17.868565   65727 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:17.868570   65727 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:17.868746   65727 main.go:141] libmachine: (auto-256954) Calling .GetState
	I1202 13:10:17.869220   65727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:17.869269   65727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:17.872778   65727 addons.go:234] Setting addon default-storageclass=true in "auto-256954"
	I1202 13:10:17.872823   65727 host.go:66] Checking if "auto-256954" exists ...
	I1202 13:10:17.873208   65727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:17.873253   65727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:17.886369   65727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1202 13:10:17.886945   65727 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:17.887484   65727 main.go:141] libmachine: Using API Version  1
	I1202 13:10:17.887513   65727 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:17.887892   65727 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:17.888095   65727 main.go:141] libmachine: (auto-256954) Calling .GetState
	I1202 13:10:17.888395   65727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I1202 13:10:17.888715   65727 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:17.889090   65727 main.go:141] libmachine: Using API Version  1
	I1202 13:10:17.889106   65727 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:17.889461   65727 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:17.889999   65727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:17.890026   65727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:17.890199   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:10:17.892015   65727 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 13:10:17.893413   65727 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:10:17.893426   65727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 13:10:17.893439   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:10:17.896631   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:10:17.897088   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:10:17.897114   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:10:17.897774   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:10:17.897934   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:10:17.898078   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:10:17.898211   65727 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa Username:docker}
	I1202 13:10:17.904670   65727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I1202 13:10:17.905051   65727 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:17.905526   65727 main.go:141] libmachine: Using API Version  1
	I1202 13:10:17.905543   65727 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:17.905852   65727 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:17.906025   65727 main.go:141] libmachine: (auto-256954) Calling .GetState
	I1202 13:10:17.907356   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:10:17.907511   65727 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 13:10:17.907521   65727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 13:10:17.907532   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHHostname
	I1202 13:10:17.910136   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:10:17.910558   65727 main.go:141] libmachine: (auto-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:c5:a1", ip: ""} in network mk-auto-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:09:45 +0000 UTC Type:0 Mac:52:54:00:e3:c5:a1 Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:auto-256954 Clientid:01:52:54:00:e3:c5:a1}
	I1202 13:10:17.910579   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined IP address 192.168.50.47 and MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:10:17.910728   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHPort
	I1202 13:10:17.910885   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHKeyPath
	I1202 13:10:17.911034   65727 main.go:141] libmachine: (auto-256954) Calling .GetSSHUsername
	I1202 13:10:17.911171   65727 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa Username:docker}
	I1202 13:10:18.124712   65727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:10:18.124821   65727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 13:10:18.304764   65727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:10:18.319997   65727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 13:10:18.776668   65727 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
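The long sed pipeline at 13:10:18 rewrites the coredns ConfigMap so cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.50.1 here). A sketch of viewing the patched Corefile, with the injected stanza (taken from that sed expression) shown as comments:
	# hosts {
	#    192.168.50.1 host.minikube.internal
	#    fallthrough
	# }
	kubectl --context auto-256954 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'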
	I1202 13:10:18.778352   65727 node_ready.go:35] waiting up to 15m0s for node "auto-256954" to be "Ready" ...
	I1202 13:10:18.804111   65727 node_ready.go:49] node "auto-256954" has status "Ready":"True"
	I1202 13:10:18.804143   65727 node_ready.go:38] duration metric: took 25.764331ms for node "auto-256954" to be "Ready" ...
	I1202 13:10:18.804156   65727 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:10:18.815220   65727 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:19.290443   65727 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-256954" context rescaled to 1 replicas
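Rescaling the coredns deployment to a single replica, as logged here, is equivalent to running:
	kubectl --context auto-256954 -n kube-system scale deployment coredns --replicas=1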
	I1202 13:10:19.442917   65727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.138113334s)
	I1202 13:10:19.442957   65727 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122925423s)
	I1202 13:10:19.442972   65727 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:19.442985   65727 main.go:141] libmachine: (auto-256954) Calling .Close
	I1202 13:10:19.442998   65727 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:19.443013   65727 main.go:141] libmachine: (auto-256954) Calling .Close
	I1202 13:10:19.443276   65727 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:19.443315   65727 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:19.443337   65727 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:19.443350   65727 main.go:141] libmachine: (auto-256954) Calling .Close
	I1202 13:10:19.443381   65727 main.go:141] libmachine: (auto-256954) DBG | Closing plugin on server side
	I1202 13:10:19.443502   65727 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:19.443538   65727 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:19.443547   65727 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:19.443559   65727 main.go:141] libmachine: (auto-256954) Calling .Close
	I1202 13:10:19.443562   65727 main.go:141] libmachine: (auto-256954) DBG | Closing plugin on server side
	I1202 13:10:19.443593   65727 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:19.443601   65727 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:19.443789   65727 main.go:141] libmachine: (auto-256954) DBG | Closing plugin on server side
	I1202 13:10:19.443792   65727 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:19.443833   65727 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:19.456171   65727 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:19.456186   65727 main.go:141] libmachine: (auto-256954) Calling .Close
	I1202 13:10:19.456421   65727 main.go:141] libmachine: (auto-256954) DBG | Closing plugin on server side
	I1202 13:10:19.456431   65727 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:19.456444   65727 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:19.458106   65727 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1202 13:10:19.459500   65727 addons.go:510] duration metric: took 1.608073907s for enable addons: enabled=[storage-provisioner default-storageclass]
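With both storage addons reported enabled roughly 1.6s after StartCluster, a sketch of confirming them outside the harness (profile and context names from the log):
	minikube -p auto-256954 addons list
	kubectl --context auto-256954 get storageclass
	kubectl --context auto-256954 -n kube-system get pods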
	I1202 13:10:16.215319   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:16.215636   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find current IP address of domain kindnet-256954 in network mk-kindnet-256954
	I1202 13:10:16.215660   66128 main.go:141] libmachine: (kindnet-256954) DBG | I1202 13:10:16.215593   66194 retry.go:31] will retry after 5.438303243s: waiting for machine to come up
	I1202 13:10:21.655872   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.656443   66128 main.go:141] libmachine: (kindnet-256954) Found IP for machine: 192.168.61.241
	I1202 13:10:21.656470   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has current primary IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.656478   66128 main.go:141] libmachine: (kindnet-256954) Reserving static IP address...
	I1202 13:10:21.656836   66128 main.go:141] libmachine: (kindnet-256954) DBG | unable to find host DHCP lease matching {name: "kindnet-256954", mac: "52:54:00:5b:c8:76", ip: "192.168.61.241"} in network mk-kindnet-256954
	I1202 13:10:21.728430   66128 main.go:141] libmachine: (kindnet-256954) DBG | Getting to WaitForSSH function...
	I1202 13:10:21.728455   66128 main.go:141] libmachine: (kindnet-256954) Reserved static IP address: 192.168.61.241
	I1202 13:10:21.728468   66128 main.go:141] libmachine: (kindnet-256954) Waiting for SSH to be available...
	I1202 13:10:21.731164   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.731692   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:21.731717   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.731880   66128 main.go:141] libmachine: (kindnet-256954) DBG | Using SSH client type: external
	I1202 13:10:21.731903   66128 main.go:141] libmachine: (kindnet-256954) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa (-rw-------)
	I1202 13:10:21.731932   66128 main.go:141] libmachine: (kindnet-256954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 13:10:21.731946   66128 main.go:141] libmachine: (kindnet-256954) DBG | About to run SSH command:
	I1202 13:10:21.731962   66128 main.go:141] libmachine: (kindnet-256954) DBG | exit 0
	I1202 13:10:21.856719   66128 main.go:141] libmachine: (kindnet-256954) DBG | SSH cmd err, output: <nil>: 
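WaitForSSH shells out to the system ssh client with the options dumped at 13:10:21 and simply runs "exit 0". The equivalent manual probe, using the key path and address from the log, is:
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa \
	    docker@192.168.61.241 'exit 0' && echo "guest SSH is up"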
	I1202 13:10:21.856957   66128 main.go:141] libmachine: (kindnet-256954) KVM machine creation complete!
	I1202 13:10:21.857296   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetConfigRaw
	I1202 13:10:21.857865   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:21.858006   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:21.858185   66128 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 13:10:21.858199   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetState
	I1202 13:10:21.859535   66128 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 13:10:21.859545   66128 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 13:10:21.859550   66128 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 13:10:21.859554   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:21.862226   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.862596   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:21.862618   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.862773   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:21.862943   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:21.863089   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:21.863211   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:21.863354   66128 main.go:141] libmachine: Using SSH client type: native
	I1202 13:10:21.863559   66128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I1202 13:10:21.863572   66128 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 13:10:21.967446   66128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 13:10:21.967474   66128 main.go:141] libmachine: Detecting the provisioner...
	I1202 13:10:21.967484   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:21.970308   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.970643   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:21.970671   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:21.970797   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:21.970968   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:21.971143   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:21.971273   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:21.971464   66128 main.go:141] libmachine: Using SSH client type: native
	I1202 13:10:21.971631   66128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I1202 13:10:21.971645   66128 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 13:10:22.073163   66128 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 13:10:22.073259   66128 main.go:141] libmachine: found compatible host: buildroot
	I1202 13:10:22.073270   66128 main.go:141] libmachine: Provisioning with buildroot...
	I1202 13:10:22.073281   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetMachineName
	I1202 13:10:22.073526   66128 buildroot.go:166] provisioning hostname "kindnet-256954"
	I1202 13:10:22.073559   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetMachineName
	I1202 13:10:22.073767   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.076907   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.077312   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.077354   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.077463   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:22.077633   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.077807   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.077996   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:22.078187   66128 main.go:141] libmachine: Using SSH client type: native
	I1202 13:10:22.078427   66128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I1202 13:10:22.078447   66128 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-256954 && echo "kindnet-256954" | sudo tee /etc/hostname
	I1202 13:10:22.199070   66128 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-256954
	
	I1202 13:10:22.199104   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.201718   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.202043   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.202070   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.202186   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:22.202371   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.202518   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.202682   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:22.202865   66128 main.go:141] libmachine: Using SSH client type: native
	I1202 13:10:22.203055   66128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I1202 13:10:22.203078   66128 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-256954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-256954/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-256954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 13:10:22.312964   66128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
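The hostname step sets the guest hostname and, via the script above, rewrites (or appends) the 127.0.1.1 entry in /etc/hosts. A sketch for confirming both inside the guest:
	hostname                      # should print kindnet-256954
	grep '^127.0.1.1' /etc/hosts  # should map 127.0.1.1 to kindnet-256954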
	I1202 13:10:22.313000   66128 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 13:10:22.313034   66128 buildroot.go:174] setting up certificates
	I1202 13:10:22.313051   66128 provision.go:84] configureAuth start
	I1202 13:10:22.313068   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetMachineName
	I1202 13:10:22.313333   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetIP
	I1202 13:10:22.315739   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.316164   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.316190   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.316378   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.319034   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.319409   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.319434   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.319583   66128 provision.go:143] copyHostCerts
	I1202 13:10:22.319659   66128 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 13:10:22.319672   66128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 13:10:22.319745   66128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 13:10:22.319845   66128 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 13:10:22.319854   66128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 13:10:22.319880   66128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 13:10:22.319950   66128 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 13:10:22.319959   66128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 13:10:22.319992   66128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 13:10:22.320056   66128 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.kindnet-256954 san=[127.0.0.1 192.168.61.241 kindnet-256954 localhost minikube]
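configureAuth generates a machine server certificate whose SANs are listed above (loopback, the VM IP, the machine name, localhost, minikube). A sketch for inspecting those SANs with openssl on the build host, using the path from the log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'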
	I1202 13:10:22.378592   66128 provision.go:177] copyRemoteCerts
	I1202 13:10:22.378642   66128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 13:10:22.378661   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.381186   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.381521   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.381540   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.381690   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:22.381852   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.382007   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:22.382126   66128 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa Username:docker}
	I1202 13:10:22.463290   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 13:10:22.489685   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1202 13:10:22.516164   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 13:10:22.542794   66128 provision.go:87] duration metric: took 229.727751ms to configureAuth
	I1202 13:10:22.542818   66128 buildroot.go:189] setting minikube options for container-runtime
	I1202 13:10:22.542972   66128 config.go:182] Loaded profile config "kindnet-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:10:22.543045   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.545709   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.546027   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.546055   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.546247   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:22.546422   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.546605   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.546740   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:22.546866   66128 main.go:141] libmachine: Using SSH client type: native
	I1202 13:10:22.547092   66128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I1202 13:10:22.547110   66128 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 13:10:22.772888   66128 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 13:10:22.772927   66128 main.go:141] libmachine: Checking connection to Docker...
	I1202 13:10:22.772939   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetURL
	I1202 13:10:22.774205   66128 main.go:141] libmachine: (kindnet-256954) DBG | Using libvirt version 6000000
	I1202 13:10:22.776268   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.776658   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.776700   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.776881   66128 main.go:141] libmachine: Docker is up and running!
	I1202 13:10:22.776892   66128 main.go:141] libmachine: Reticulating splines...
	I1202 13:10:22.776900   66128 client.go:171] duration metric: took 26.338342064s to LocalClient.Create
	I1202 13:10:22.776930   66128 start.go:167] duration metric: took 26.338416238s to libmachine.API.Create "kindnet-256954"
	I1202 13:10:22.776942   66128 start.go:293] postStartSetup for "kindnet-256954" (driver="kvm2")
	I1202 13:10:22.776955   66128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 13:10:22.776985   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:22.777223   66128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 13:10:22.777244   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.779470   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.779823   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.779850   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.779973   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:22.780145   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.780303   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:22.780408   66128 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa Username:docker}
	I1202 13:10:22.863494   66128 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 13:10:22.868015   66128 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 13:10:22.868040   66128 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 13:10:22.868099   66128 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 13:10:22.868200   66128 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 13:10:22.868338   66128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 13:10:22.877635   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 13:10:22.902264   66128 start.go:296] duration metric: took 125.309884ms for postStartSetup
	I1202 13:10:22.902310   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetConfigRaw
	I1202 13:10:22.902830   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetIP
	I1202 13:10:22.905280   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.905679   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.905705   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.905933   66128 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/config.json ...
	I1202 13:10:22.906134   66128 start.go:128] duration metric: took 26.488964361s to createHost
	I1202 13:10:22.906158   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:22.908293   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.908638   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:22.908661   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:22.908844   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:22.909070   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.909262   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:22.909428   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:22.909580   66128 main.go:141] libmachine: Using SSH client type: native
	I1202 13:10:22.909727   66128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I1202 13:10:22.909737   66128 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 13:10:23.013095   66128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733145022.998241844
	
	I1202 13:10:23.013112   66128 fix.go:216] guest clock: 1733145022.998241844
	I1202 13:10:23.013118   66128 fix.go:229] Guest: 2024-12-02 13:10:22.998241844 +0000 UTC Remote: 2024-12-02 13:10:22.906146216 +0000 UTC m=+32.309254230 (delta=92.095628ms)
	I1202 13:10:23.013136   66128 fix.go:200] guest clock delta is within tolerance: 92.095628ms
	I1202 13:10:23.013140   66128 start.go:83] releasing machines lock for "kindnet-256954", held for 26.596143266s
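The guest clock check runs "date +%s.%N" over SSH and compares it with the host-side reference taken when the command was issued; the reported delta is just the difference of the two timestamps logged above:
	echo '1733145022.998241844 - 1733145022.906146216' | bc -l   # 0.092095628 s, i.e. the 92.095628ms delta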
	I1202 13:10:23.013156   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:23.013405   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetIP
	I1202 13:10:23.016289   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:23.016656   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:23.016681   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:23.016831   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:23.017270   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:23.017416   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:23.017515   66128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 13:10:23.017552   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:23.017605   66128 ssh_runner.go:195] Run: cat /version.json
	I1202 13:10:23.017626   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:23.020088   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:23.020458   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:23.020486   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:23.020555   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:23.020645   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:23.020802   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:23.020962   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:23.020989   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:23.021006   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:23.021090   66128 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa Username:docker}
	I1202 13:10:23.021179   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:23.021321   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:23.021469   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:23.021596   66128 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa Username:docker}
	I1202 13:10:23.121636   66128 ssh_runner.go:195] Run: systemctl --version
	I1202 13:10:23.127614   66128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 13:10:23.286593   66128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 13:10:23.294275   66128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 13:10:23.294347   66128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 13:10:23.311427   66128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 13:10:23.311450   66128 start.go:495] detecting cgroup driver to use...
	I1202 13:10:23.311519   66128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 13:10:23.328021   66128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 13:10:23.343077   66128 docker.go:217] disabling cri-docker service (if available) ...
	I1202 13:10:23.343136   66128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 13:10:23.357963   66128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 13:10:23.372861   66128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 13:10:23.489774   66128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 13:10:23.665603   66128 docker.go:233] disabling docker service ...
	I1202 13:10:23.665678   66128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 13:10:23.680183   66128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 13:10:23.693285   66128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 13:10:23.822404   66128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 13:10:23.946513   66128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 13:10:23.961881   66128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 13:10:23.981699   66128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 13:10:23.981758   66128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:10:23.992852   66128 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 13:10:23.992918   66128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:10:24.003957   66128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:10:24.015203   66128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:10:24.026188   66128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 13:10:24.037083   66128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:10:24.047331   66128 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:10:24.064415   66128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
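The sed runs above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and allow unprivileged binds to low ports via default_sysctls, all inside /etc/crio/crio.conf.d/02-crio.conf. A sketch for confirming the drop-in ended up as intended (expected values taken from the log; exact file layout is approximate):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",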
	I1202 13:10:24.075038   66128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 13:10:24.086109   66128 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 13:10:24.086171   66128 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 13:10:24.100185   66128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
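The failed sysctl probe above just means br_netfilter was not loaded yet, so minikube loads the module and flips IPv4 forwarding via /proc for the current boot. A sketch of making the same kernel prerequisites persistent on a generic host (not what minikube itself does):

    # Persist the bridge-netfilter module and the sysctls across reboots
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    sudo sysctl --system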
	I1202 13:10:24.109314   66128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:10:24.224910   66128 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 13:10:24.323836   66128 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 13:10:24.323916   66128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 13:10:24.329071   66128 start.go:563] Will wait 60s for crictl version
	I1202 13:10:24.329122   66128 ssh_runner.go:195] Run: which crictl
	I1202 13:10:24.334520   66128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 13:10:24.382205   66128 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 13:10:24.382296   66128 ssh_runner.go:195] Run: crio --version
	I1202 13:10:24.415562   66128 ssh_runner.go:195] Run: crio --version
	I1202 13:10:24.446813   66128 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 13:10:20.821928   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:22.822637   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:24.823337   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:24.447807   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetIP
	I1202 13:10:24.450377   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:24.450695   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:24.450717   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:24.450925   66128 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1202 13:10:24.455412   66128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 13:10:24.468064   66128 kubeadm.go:883] updating cluster {Name:kindnet-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:kindnet-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 13:10:24.468176   66128 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 13:10:24.468288   66128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 13:10:24.502540   66128 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 13:10:24.502611   66128 ssh_runner.go:195] Run: which lz4
	I1202 13:10:24.507174   66128 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 13:10:24.511765   66128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 13:10:24.511791   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 13:10:26.824918   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:29.324565   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:25.937361   66128 crio.go:462] duration metric: took 1.430228604s to copy over tarball
	I1202 13:10:25.937438   66128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 13:10:28.238531   66128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301066021s)
	I1202 13:10:28.238561   66128 crio.go:469] duration metric: took 2.301166444s to extract the tarball
	I1202 13:10:28.238568   66128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 13:10:28.277277   66128 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 13:10:28.320418   66128 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 13:10:28.320449   66128 cache_images.go:84] Images are preloaded, skipping loading
	I1202 13:10:28.320461   66128 kubeadm.go:934] updating node { 192.168.61.241 8443 v1.31.2 crio true true} ...
	I1202 13:10:28.320584   66128 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-256954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kindnet-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1202 13:10:28.320677   66128 ssh_runner.go:195] Run: crio config
	I1202 13:10:28.367180   66128 cni.go:84] Creating CNI manager for "kindnet"
	I1202 13:10:28.367204   66128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 13:10:28.367224   66128 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.241 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-256954 NodeName:kindnet-256954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 13:10:28.367340   66128 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-256954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.241"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.241"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 13:10:28.367428   66128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 13:10:28.377566   66128 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 13:10:28.377635   66128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 13:10:28.387112   66128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1202 13:10:28.403857   66128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 13:10:28.421452   66128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
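At this point the rendered kubeadm config (dumped in full above, 2294 bytes) has been copied to /var/tmp/minikube/kubeadm.yaml.new. A sketch of a manual sanity check against the same kubeadm binary (not something the test itself runs):

    # Dry-run the generated config; validates it without changing node state
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run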
	I1202 13:10:28.439160   66128 ssh_runner.go:195] Run: grep 192.168.61.241	control-plane.minikube.internal$ /etc/hosts
	I1202 13:10:28.443091   66128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 13:10:28.454804   66128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:10:28.573113   66128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:10:28.591218   66128 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954 for IP: 192.168.61.241
	I1202 13:10:28.591236   66128 certs.go:194] generating shared ca certs ...
	I1202 13:10:28.591250   66128 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:28.591405   66128 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 13:10:28.591455   66128 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 13:10:28.591468   66128 certs.go:256] generating profile certs ...
	I1202 13:10:28.591529   66128 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/client.key
	I1202 13:10:28.591556   66128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/client.crt with IP's: []
	I1202 13:10:28.789564   66128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/client.crt ...
	I1202 13:10:28.789591   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/client.crt: {Name:mk66c8908d6225ac3b27116712968f803ecc05c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:28.789747   66128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/client.key ...
	I1202 13:10:28.789758   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/client.key: {Name:mkffa59f861e7d7ab884b43b7472327043ff740b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:28.789845   66128 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.key.9d0988d5
	I1202 13:10:28.789860   66128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.crt.9d0988d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.241]
	I1202 13:10:29.147218   66128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.crt.9d0988d5 ...
	I1202 13:10:29.147244   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.crt.9d0988d5: {Name:mkbbbbd0d254ad4e42276a18d1e2868a2f3e41af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:29.147394   66128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.key.9d0988d5 ...
	I1202 13:10:29.147406   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.key.9d0988d5: {Name:mk1b117e5f677687b42dfb63ef4ffc56df61ce06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:29.147474   66128 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.crt.9d0988d5 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.crt
	I1202 13:10:29.147575   66128 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.key.9d0988d5 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.key
	I1202 13:10:29.147666   66128 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.key
	I1202 13:10:29.147690   66128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.crt with IP's: []
	I1202 13:10:29.254677   66128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.crt ...
	I1202 13:10:29.254700   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.crt: {Name:mkc1f58a1de5d86d9b5c8c63ea4d2a2255981344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:29.254847   66128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.key ...
	I1202 13:10:29.254861   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.key: {Name:mkafbebba082d63a5885963ffa2c37960fbb2d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:29.255034   66128 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 13:10:29.255070   66128 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 13:10:29.255079   66128 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 13:10:29.255113   66128 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 13:10:29.255137   66128 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 13:10:29.255168   66128 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 13:10:29.255204   66128 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 13:10:29.255733   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 13:10:29.316066   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 13:10:29.352432   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 13:10:29.383166   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 13:10:29.408786   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 13:10:29.434017   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 13:10:29.461100   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 13:10:29.488265   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/kindnet-256954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 13:10:29.513759   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 13:10:29.540637   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 13:10:29.568014   66128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 13:10:29.592511   66128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 13:10:29.610268   66128 ssh_runner.go:195] Run: openssl version
	I1202 13:10:29.617194   66128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 13:10:29.628968   66128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 13:10:29.633552   66128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 13:10:29.633600   66128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 13:10:29.639406   66128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 13:10:29.650458   66128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 13:10:29.663264   66128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:10:29.668142   66128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:10:29.668207   66128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:10:29.675940   66128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 13:10:29.688217   66128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 13:10:29.699282   66128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 13:10:29.704354   66128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 13:10:29.704411   66128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 13:10:29.710540   66128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
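The certificate handling above generates the profile certs, copies them to /var/lib/minikube/certs, and hashes the CA bundles into /etc/ssl/certs so the node trusts them. One quick check is that the apiserver certificate carries the SANs requested earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.241); a sketch:

    # Inspect the SANs on the generated apiserver certificate
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'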
	I1202 13:10:29.721970   66128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 13:10:29.726368   66128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 13:10:29.726425   66128 kubeadm.go:392] StartCluster: {Name:kindnet-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:kindnet-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:10:29.726486   66128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 13:10:29.726530   66128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 13:10:29.769525   66128 cri.go:89] found id: ""
	I1202 13:10:29.769602   66128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 13:10:29.781200   66128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:10:29.790815   66128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:10:29.800823   66128 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:10:29.800846   66128 kubeadm.go:157] found existing configuration files:
	
	I1202 13:10:29.800888   66128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 13:10:29.809490   66128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:10:29.809548   66128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:10:29.819277   66128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 13:10:29.828157   66128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:10:29.828201   66128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:10:29.837428   66128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 13:10:29.846895   66128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:10:29.846953   66128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:10:29.856329   66128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 13:10:29.866954   66128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:10:29.867010   66128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 13:10:29.876557   66128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:10:30.048860   66128 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 13:10:31.342403   65727 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f8zzz" not found
	I1202 13:10:31.342427   65727 pod_ready.go:82] duration metric: took 12.527181214s for pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace to be "Ready" ...
	E1202 13:10:31.342437   65727 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-f8zzz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-f8zzz" not found
	I1202 13:10:31.342445   65727 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:33.348672   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:40.161321   66128 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:10:40.161397   66128 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:10:40.161498   66128 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:10:40.161584   66128 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:10:40.161706   66128 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:10:40.161790   66128 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:10:40.163288   66128 out.go:235]   - Generating certificates and keys ...
	I1202 13:10:40.163385   66128 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:10:40.163471   66128 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:10:40.163567   66128 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 13:10:40.163685   66128 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 13:10:40.163796   66128 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 13:10:40.163880   66128 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 13:10:40.163957   66128 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 13:10:40.164132   66128 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-256954 localhost] and IPs [192.168.61.241 127.0.0.1 ::1]
	I1202 13:10:40.164211   66128 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 13:10:40.164438   66128 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-256954 localhost] and IPs [192.168.61.241 127.0.0.1 ::1]
	I1202 13:10:40.164540   66128 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 13:10:40.164629   66128 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 13:10:40.164684   66128 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 13:10:40.164757   66128 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:10:40.164832   66128 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:10:40.164907   66128 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:10:40.164999   66128 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:10:40.165090   66128 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:10:40.165182   66128 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:10:40.165304   66128 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:10:40.165400   66128 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 13:10:40.166623   66128 out.go:235]   - Booting up control plane ...
	I1202 13:10:40.166736   66128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 13:10:40.166854   66128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 13:10:40.166942   66128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 13:10:40.167103   66128 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 13:10:40.167228   66128 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 13:10:40.167265   66128 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 13:10:40.167372   66128 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 13:10:40.167464   66128 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 13:10:40.167555   66128 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.894619ms
	I1202 13:10:40.167658   66128 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 13:10:40.167730   66128 kubeadm.go:310] [api-check] The API server is healthy after 5.001379473s
	I1202 13:10:40.167860   66128 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 13:10:40.167969   66128 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 13:10:40.168021   66128 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 13:10:40.168265   66128 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-256954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 13:10:40.168347   66128 kubeadm.go:310] [bootstrap-token] Using token: 1yn8ox.77u0si9nxqtgl61p
	I1202 13:10:40.169543   66128 out.go:235]   - Configuring RBAC rules ...
	I1202 13:10:40.169670   66128 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 13:10:40.169778   66128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 13:10:40.169953   66128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 13:10:40.170157   66128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 13:10:40.170336   66128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 13:10:40.170463   66128 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 13:10:40.170628   66128 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 13:10:40.170692   66128 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 13:10:40.170757   66128 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 13:10:40.170767   66128 kubeadm.go:310] 
	I1202 13:10:40.170847   66128 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 13:10:40.170858   66128 kubeadm.go:310] 
	I1202 13:10:40.170945   66128 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 13:10:40.170954   66128 kubeadm.go:310] 
	I1202 13:10:40.170975   66128 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 13:10:40.171026   66128 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 13:10:40.171069   66128 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 13:10:40.171075   66128 kubeadm.go:310] 
	I1202 13:10:40.171119   66128 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 13:10:40.171124   66128 kubeadm.go:310] 
	I1202 13:10:40.171167   66128 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 13:10:40.171179   66128 kubeadm.go:310] 
	I1202 13:10:40.171237   66128 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 13:10:40.171332   66128 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 13:10:40.171418   66128 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 13:10:40.171429   66128 kubeadm.go:310] 
	I1202 13:10:40.171499   66128 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 13:10:40.171559   66128 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 13:10:40.171565   66128 kubeadm.go:310] 
	I1202 13:10:40.171633   66128 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1yn8ox.77u0si9nxqtgl61p \
	I1202 13:10:40.171761   66128 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 13:10:40.171793   66128 kubeadm.go:310] 	--control-plane 
	I1202 13:10:40.171802   66128 kubeadm.go:310] 
	I1202 13:10:40.171926   66128 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 13:10:40.171937   66128 kubeadm.go:310] 
	I1202 13:10:40.172051   66128 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1yn8ox.77u0si9nxqtgl61p \
	I1202 13:10:40.172211   66128 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
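The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key, so a joining node can recompute it independently. A sketch using this cluster's certificatesDir (/var/lib/minikube/certs, per the kubeadm config above):

    # Recompute the discovery token CA cert hash printed by kubeadm init
    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'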
	I1202 13:10:40.172245   66128 cni.go:84] Creating CNI manager for "kindnet"
	I1202 13:10:40.173558   66128 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 13:10:35.349055   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:37.848594   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:39.850139   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:40.174631   66128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 13:10:40.181067   66128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 13:10:40.181082   66128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 13:10:40.202036   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
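The manifest applied here is minikube's rendered kindnet CNI (written to /var/tmp/minikube/cni.yaml above). A sketch of checking the rollout once the cluster is reachable, assuming the DaemonSet keeps kindnet's upstream name and the kubeconfig context matches the profile name:

    # Both names below are assumptions: daemonset/kindnet, context kindnet-256954
    kubectl --context kindnet-256954 -n kube-system rollout status daemonset/kindnet --timeout=120s
    kubectl --context kindnet-256954 get nodes -o wide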
	I1202 13:10:40.502016   66128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 13:10:40.502096   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:40.502154   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-256954 minikube.k8s.io/updated_at=2024_12_02T13_10_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=kindnet-256954 minikube.k8s.io/primary=true
	I1202 13:10:40.659317   66128 ops.go:34] apiserver oom_adj: -16
	I1202 13:10:40.659447   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:41.160466   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:41.660373   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:42.159886   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:42.660172   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:43.160337   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:43.660320   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:44.160453   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:44.659688   66128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:10:44.815170   66128 kubeadm.go:1113] duration metric: took 4.313131341s to wait for elevateKubeSystemPrivileges
	I1202 13:10:44.815207   66128 kubeadm.go:394] duration metric: took 15.088790008s to StartCluster
	I1202 13:10:44.815223   66128 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:44.815287   66128 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:10:44.817888   66128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:10:44.818260   66128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 13:10:44.818287   66128 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:10:44.818335   66128 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 13:10:44.818442   66128 addons.go:69] Setting storage-provisioner=true in profile "kindnet-256954"
	I1202 13:10:44.818467   66128 addons.go:234] Setting addon storage-provisioner=true in "kindnet-256954"
	I1202 13:10:44.818517   66128 config.go:182] Loaded profile config "kindnet-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:10:44.818536   66128 host.go:66] Checking if "kindnet-256954" exists ...
	I1202 13:10:44.818480   66128 addons.go:69] Setting default-storageclass=true in profile "kindnet-256954"
	I1202 13:10:44.818589   66128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-256954"
	I1202 13:10:44.819384   66128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:44.819483   66128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:44.819560   66128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:44.819430   66128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:44.822338   66128 out.go:177] * Verifying Kubernetes components...
	I1202 13:10:44.823604   66128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:10:44.835839   66128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I1202 13:10:44.836337   66128 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:44.836828   66128 main.go:141] libmachine: Using API Version  1
	I1202 13:10:44.836857   66128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:44.837221   66128 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:44.837425   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetState
	I1202 13:10:44.839593   66128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I1202 13:10:44.839984   66128 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:44.840490   66128 main.go:141] libmachine: Using API Version  1
	I1202 13:10:44.840513   66128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:44.840877   66128 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:44.841371   66128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:44.841411   66128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:44.841530   66128 addons.go:234] Setting addon default-storageclass=true in "kindnet-256954"
	I1202 13:10:44.841564   66128 host.go:66] Checking if "kindnet-256954" exists ...
	I1202 13:10:44.841922   66128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:44.841967   66128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:44.858840   66128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I1202 13:10:44.859130   66128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
	I1202 13:10:44.859309   66128 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:44.859620   66128 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:44.859776   66128 main.go:141] libmachine: Using API Version  1
	I1202 13:10:44.859785   66128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:44.860210   66128 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:44.860378   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetState
	I1202 13:10:44.860406   66128 main.go:141] libmachine: Using API Version  1
	I1202 13:10:44.860425   66128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:44.860977   66128 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:44.861566   66128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:10:44.861599   66128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:10:44.862074   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:44.864028   66128 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 13:10:42.348735   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:44.852596   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:44.865473   66128 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:10:44.865494   66128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 13:10:44.865513   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:44.868455   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:44.868875   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:44.868899   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:44.869124   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:44.869311   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:44.869452   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:44.869600   66128 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa Username:docker}
	I1202 13:10:44.877316   66128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I1202 13:10:44.877640   66128 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:10:44.878280   66128 main.go:141] libmachine: Using API Version  1
	I1202 13:10:44.878303   66128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:10:44.878628   66128 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:10:44.878807   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetState
	I1202 13:10:44.880125   66128 main.go:141] libmachine: (kindnet-256954) Calling .DriverName
	I1202 13:10:44.880324   66128 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 13:10:44.880347   66128 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 13:10:44.880362   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHHostname
	I1202 13:10:44.883324   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:44.883667   66128 main.go:141] libmachine: (kindnet-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:c8:76", ip: ""} in network mk-kindnet-256954: {Iface:virbr1 ExpiryTime:2024-12-02 14:10:12 +0000 UTC Type:0 Mac:52:54:00:5b:c8:76 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:kindnet-256954 Clientid:01:52:54:00:5b:c8:76}
	I1202 13:10:44.883686   66128 main.go:141] libmachine: (kindnet-256954) DBG | domain kindnet-256954 has defined IP address 192.168.61.241 and MAC address 52:54:00:5b:c8:76 in network mk-kindnet-256954
	I1202 13:10:44.883843   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHPort
	I1202 13:10:44.883975   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHKeyPath
	I1202 13:10:44.884083   66128 main.go:141] libmachine: (kindnet-256954) Calling .GetSSHUsername
	I1202 13:10:44.884170   66128 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/kindnet-256954/id_rsa Username:docker}
	I1202 13:10:45.133895   66128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:10:45.134109   66128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 13:10:45.202379   66128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 13:10:45.233571   66128 node_ready.go:35] waiting up to 15m0s for node "kindnet-256954" to be "Ready" ...
	I1202 13:10:45.342325   66128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:10:45.795272   66128 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
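The sed pipeline logged at 13:10:45.134109 is what injects the host.minikube.internal record into the coredns Corefile before the ConfigMap is replaced. Below is a minimal sketch of reading that ConfigMap back with client-go to confirm the hosts block (192.168.61.1 host.minikube.internal) landed; the local kubeconfig path and error handling are assumptions for illustration and are not part of the test run, which does this through the in-VM kubectl over SSH.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a local kubeconfig at the default location points at the cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The kubeadm-managed CoreDNS config lives in kube-system/coredns under the "Corefile" key.
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // After the replace above, the Corefile should contain a block like:
        //   hosts {
        //      192.168.61.1 host.minikube.internal
        //      fallthrough
        //   }
        fmt.Println(cm.Data["Corefile"])
    }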
	I1202 13:10:45.795380   66128 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:45.795412   66128 main.go:141] libmachine: (kindnet-256954) Calling .Close
	I1202 13:10:45.795726   66128 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:45.795745   66128 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:45.795754   66128 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:45.795776   66128 main.go:141] libmachine: (kindnet-256954) Calling .Close
	I1202 13:10:45.795984   66128 main.go:141] libmachine: (kindnet-256954) DBG | Closing plugin on server side
	I1202 13:10:45.795992   66128 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:45.796008   66128 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:45.811789   66128 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:45.811807   66128 main.go:141] libmachine: (kindnet-256954) Calling .Close
	I1202 13:10:45.812048   66128 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:45.812063   66128 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:46.044093   66128 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:46.044115   66128 main.go:141] libmachine: (kindnet-256954) Calling .Close
	I1202 13:10:46.044583   66128 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:46.044599   66128 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:46.044610   66128 main.go:141] libmachine: Making call to close driver server
	I1202 13:10:46.044620   66128 main.go:141] libmachine: (kindnet-256954) Calling .Close
	I1202 13:10:46.044835   66128 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:10:46.044860   66128 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:10:46.044875   66128 main.go:141] libmachine: (kindnet-256954) DBG | Closing plugin on server side
	I1202 13:10:46.046974   66128 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1202 13:10:47.349598   65727 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"False"
	I1202 13:10:49.848700   65727 pod_ready.go:93] pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:49.848742   65727 pod_ready.go:82] duration metric: took 18.506290093s for pod "coredns-7c65d6cfc9-sc7cx" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.848761   65727 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.853300   65727 pod_ready.go:93] pod "etcd-auto-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:49.853319   65727 pod_ready.go:82] duration metric: took 4.552654ms for pod "etcd-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.853326   65727 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.857243   65727 pod_ready.go:93] pod "kube-apiserver-auto-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:49.857259   65727 pod_ready.go:82] duration metric: took 3.926671ms for pod "kube-apiserver-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.857267   65727 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.861975   65727 pod_ready.go:93] pod "kube-controller-manager-auto-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:49.861991   65727 pod_ready.go:82] duration metric: took 4.717782ms for pod "kube-controller-manager-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.861998   65727 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-fz4xd" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.865977   65727 pod_ready.go:93] pod "kube-proxy-fz4xd" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:49.865994   65727 pod_ready.go:82] duration metric: took 3.990451ms for pod "kube-proxy-fz4xd" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:49.866004   65727 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:50.246751   65727 pod_ready.go:93] pod "kube-scheduler-auto-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:50.246774   65727 pod_ready.go:82] duration metric: took 380.762465ms for pod "kube-scheduler-auto-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:50.246783   65727 pod_ready.go:39] duration metric: took 31.442613694s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:10:50.246801   65727 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:10:50.246858   65727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:10:50.262482   65727 api_server.go:72] duration metric: took 32.411087026s to wait for apiserver process to appear ...
	I1202 13:10:50.262500   65727 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:10:50.262516   65727 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I1202 13:10:50.266660   65727 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I1202 13:10:50.267545   65727 api_server.go:141] control plane version: v1.31.2
	I1202 13:10:50.267564   65727 api_server.go:131] duration metric: took 5.058676ms to wait for apiserver health ...
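The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response whose body is "ok". A minimal sketch of the same probe follows, assuming the endpoint from the log (https://192.168.50.47:8443/healthz) is reachable and skipping TLS verification for brevity, which the real client does not do.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: certificate verification is skipped to keep the sketch short.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.47:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver returns: 200 ok
    }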
	I1202 13:10:50.267571   65727 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:10:46.048017   66128 addons.go:510] duration metric: took 1.229685792s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1202 13:10:46.300518   66128 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-256954" context rescaled to 1 replicas
	I1202 13:10:47.237150   66128 node_ready.go:53] node "kindnet-256954" has status "Ready":"False"
	I1202 13:10:49.737705   66128 node_ready.go:53] node "kindnet-256954" has status "Ready":"False"
	I1202 13:10:50.450156   65727 system_pods.go:59] 7 kube-system pods found
	I1202 13:10:50.450186   65727 system_pods.go:61] "coredns-7c65d6cfc9-sc7cx" [6d6e68f7-cdde-4ce2-903d-356f0ff0e682] Running
	I1202 13:10:50.450191   65727 system_pods.go:61] "etcd-auto-256954" [8fee1a4d-a883-445b-8155-7c1d379584fa] Running
	I1202 13:10:50.450195   65727 system_pods.go:61] "kube-apiserver-auto-256954" [aa399a3a-30ae-4d4b-8823-7e1d1f5f635b] Running
	I1202 13:10:50.450198   65727 system_pods.go:61] "kube-controller-manager-auto-256954" [7246f060-6d4b-4c83-ae71-e8d9be5b27a7] Running
	I1202 13:10:50.450201   65727 system_pods.go:61] "kube-proxy-fz4xd" [93ad63e6-cacf-41a1-87ce-b7919ffa05fc] Running
	I1202 13:10:50.450205   65727 system_pods.go:61] "kube-scheduler-auto-256954" [521e2f30-5c0f-4a26-ad75-8d2526689b21] Running
	I1202 13:10:50.450207   65727 system_pods.go:61] "storage-provisioner" [55a8f6f1-498d-478b-b758-1eaadd010768] Running
	I1202 13:10:50.450213   65727 system_pods.go:74] duration metric: took 182.63719ms to wait for pod list to return data ...
	I1202 13:10:50.450220   65727 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:10:50.646725   65727 default_sa.go:45] found service account: "default"
	I1202 13:10:50.646752   65727 default_sa.go:55] duration metric: took 196.526774ms for default service account to be created ...
	I1202 13:10:50.646761   65727 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:10:50.847949   65727 system_pods.go:86] 7 kube-system pods found
	I1202 13:10:50.847976   65727 system_pods.go:89] "coredns-7c65d6cfc9-sc7cx" [6d6e68f7-cdde-4ce2-903d-356f0ff0e682] Running
	I1202 13:10:50.847981   65727 system_pods.go:89] "etcd-auto-256954" [8fee1a4d-a883-445b-8155-7c1d379584fa] Running
	I1202 13:10:50.847985   65727 system_pods.go:89] "kube-apiserver-auto-256954" [aa399a3a-30ae-4d4b-8823-7e1d1f5f635b] Running
	I1202 13:10:50.847988   65727 system_pods.go:89] "kube-controller-manager-auto-256954" [7246f060-6d4b-4c83-ae71-e8d9be5b27a7] Running
	I1202 13:10:50.847991   65727 system_pods.go:89] "kube-proxy-fz4xd" [93ad63e6-cacf-41a1-87ce-b7919ffa05fc] Running
	I1202 13:10:50.847994   65727 system_pods.go:89] "kube-scheduler-auto-256954" [521e2f30-5c0f-4a26-ad75-8d2526689b21] Running
	I1202 13:10:50.847997   65727 system_pods.go:89] "storage-provisioner" [55a8f6f1-498d-478b-b758-1eaadd010768] Running
	I1202 13:10:50.848003   65727 system_pods.go:126] duration metric: took 201.236908ms to wait for k8s-apps to be running ...
	I1202 13:10:50.848009   65727 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:10:50.848061   65727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:10:50.864710   65727 system_svc.go:56] duration metric: took 16.692586ms WaitForService to wait for kubelet
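The kubelet check above relies on systemctl's exit code rather than its output: is-active --quiet prints nothing and exits 0 only when the unit is active. A minimal local sketch of that check is shown below; the test harness runs the equivalent command with sudo over SSH inside the VM.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 means the kubelet unit is active; any non-zero code surfaces as an error.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err == nil {
            fmt.Println("kubelet is active")
        } else {
            fmt.Println("kubelet is not active:", err)
        }
    }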
	I1202 13:10:50.864738   65727 kubeadm.go:582] duration metric: took 33.013344251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:10:50.864760   65727 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:10:51.046867   65727 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:10:51.046892   65727 node_conditions.go:123] node cpu capacity is 2
	I1202 13:10:51.046905   65727 node_conditions.go:105] duration metric: took 182.138444ms to run NodePressure ...
	I1202 13:10:51.046915   65727 start.go:241] waiting for startup goroutines ...
	I1202 13:10:51.046921   65727 start.go:246] waiting for cluster config update ...
	I1202 13:10:51.046930   65727 start.go:255] writing updated cluster config ...
	I1202 13:10:51.047171   65727 ssh_runner.go:195] Run: rm -f paused
	I1202 13:10:51.093712   65727 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:10:51.095606   65727 out.go:177] * Done! kubectl is now configured to use "auto-256954" cluster and "default" namespace by default
	I1202 13:10:51.739069   66128 node_ready.go:53] node "kindnet-256954" has status "Ready":"False"
	I1202 13:10:53.739258   66128 node_ready.go:53] node "kindnet-256954" has status "Ready":"False"
	I1202 13:10:56.236671   66128 node_ready.go:53] node "kindnet-256954" has status "Ready":"False"
	I1202 13:10:58.237414   66128 node_ready.go:53] node "kindnet-256954" has status "Ready":"False"
	I1202 13:10:59.237876   66128 node_ready.go:49] node "kindnet-256954" has status "Ready":"True"
	I1202 13:10:59.237903   66128 node_ready.go:38] duration metric: took 14.004305443s for node "kindnet-256954" to be "Ready" ...
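The node_ready.go lines are polling for the node's NodeReady condition to report status True. A minimal sketch of the same readiness check with client-go follows, assuming a local kubeconfig rather than the client the test harness builds internally.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(cs, "kindnet-256954")
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", ready)
    }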
	I1202 13:10:59.237914   66128 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:10:59.251339   66128 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-d897q" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.757811   66128 pod_ready.go:93] pod "coredns-7c65d6cfc9-d897q" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:59.757833   66128 pod_ready.go:82] duration metric: took 506.465286ms for pod "coredns-7c65d6cfc9-d897q" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.757845   66128 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.762008   66128 pod_ready.go:93] pod "etcd-kindnet-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:59.762027   66128 pod_ready.go:82] duration metric: took 4.174547ms for pod "etcd-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.762039   66128 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.766598   66128 pod_ready.go:93] pod "kube-apiserver-kindnet-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:59.766618   66128 pod_ready.go:82] duration metric: took 4.570322ms for pod "kube-apiserver-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.766628   66128 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.770708   66128 pod_ready.go:93] pod "kube-controller-manager-kindnet-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:10:59.770722   66128 pod_ready.go:82] duration metric: took 4.087073ms for pod "kube-controller-manager-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:10:59.770729   66128 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-cpbk7" in "kube-system" namespace to be "Ready" ...
	I1202 13:11:00.037307   66128 pod_ready.go:93] pod "kube-proxy-cpbk7" in "kube-system" namespace has status "Ready":"True"
	I1202 13:11:00.037329   66128 pod_ready.go:82] duration metric: took 266.594513ms for pod "kube-proxy-cpbk7" in "kube-system" namespace to be "Ready" ...
	I1202 13:11:00.037338   66128 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:11:00.437958   66128 pod_ready.go:93] pod "kube-scheduler-kindnet-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:11:00.437980   66128 pod_ready.go:82] duration metric: took 400.636108ms for pod "kube-scheduler-kindnet-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:11:00.437990   66128 pod_ready.go:39] duration metric: took 1.20006148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:11:00.438004   66128 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:11:00.438050   66128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:11:00.454188   66128 api_server.go:72] duration metric: took 15.635865014s to wait for apiserver process to appear ...
	I1202 13:11:00.454208   66128 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:11:00.454230   66128 api_server.go:253] Checking apiserver healthz at https://192.168.61.241:8443/healthz ...
	I1202 13:11:00.458769   66128 api_server.go:279] https://192.168.61.241:8443/healthz returned 200:
	ok
	I1202 13:11:00.459712   66128 api_server.go:141] control plane version: v1.31.2
	I1202 13:11:00.459729   66128 api_server.go:131] duration metric: took 5.516074ms to wait for apiserver health ...
	I1202 13:11:00.459736   66128 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:11:00.641015   66128 system_pods.go:59] 8 kube-system pods found
	I1202 13:11:00.641046   66128 system_pods.go:61] "coredns-7c65d6cfc9-d897q" [083f9633-c3ea-4837-9f1d-3a8c9c7f9388] Running
	I1202 13:11:00.641053   66128 system_pods.go:61] "etcd-kindnet-256954" [2573de1f-324e-485c-8703-0e9a0d934950] Running
	I1202 13:11:00.641058   66128 system_pods.go:61] "kindnet-bh6xm" [101e08e6-e2c6-4061-ad6b-409679238547] Running
	I1202 13:11:00.641066   66128 system_pods.go:61] "kube-apiserver-kindnet-256954" [1db16b08-f4d9-44fe-bccd-357156ba4dbe] Running
	I1202 13:11:00.641072   66128 system_pods.go:61] "kube-controller-manager-kindnet-256954" [b732ce0c-d39a-47d4-925b-f2d5d79014f9] Running
	I1202 13:11:00.641076   66128 system_pods.go:61] "kube-proxy-cpbk7" [87818296-94f7-4ba0-af65-8fe05a09e89f] Running
	I1202 13:11:00.641080   66128 system_pods.go:61] "kube-scheduler-kindnet-256954" [028858ea-94f9-4636-8af5-fe2ffc5c8085] Running
	I1202 13:11:00.641085   66128 system_pods.go:61] "storage-provisioner" [b248bc7d-803c-4197-b0b5-ecf267ca2f69] Running
	I1202 13:11:00.641092   66128 system_pods.go:74] duration metric: took 181.351014ms to wait for pod list to return data ...
	I1202 13:11:00.641101   66128 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:11:00.837619   66128 default_sa.go:45] found service account: "default"
	I1202 13:11:00.837644   66128 default_sa.go:55] duration metric: took 196.537202ms for default service account to be created ...
	I1202 13:11:00.837655   66128 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:11:01.039476   66128 system_pods.go:86] 8 kube-system pods found
	I1202 13:11:01.039500   66128 system_pods.go:89] "coredns-7c65d6cfc9-d897q" [083f9633-c3ea-4837-9f1d-3a8c9c7f9388] Running
	I1202 13:11:01.039505   66128 system_pods.go:89] "etcd-kindnet-256954" [2573de1f-324e-485c-8703-0e9a0d934950] Running
	I1202 13:11:01.039509   66128 system_pods.go:89] "kindnet-bh6xm" [101e08e6-e2c6-4061-ad6b-409679238547] Running
	I1202 13:11:01.039513   66128 system_pods.go:89] "kube-apiserver-kindnet-256954" [1db16b08-f4d9-44fe-bccd-357156ba4dbe] Running
	I1202 13:11:01.039516   66128 system_pods.go:89] "kube-controller-manager-kindnet-256954" [b732ce0c-d39a-47d4-925b-f2d5d79014f9] Running
	I1202 13:11:01.039519   66128 system_pods.go:89] "kube-proxy-cpbk7" [87818296-94f7-4ba0-af65-8fe05a09e89f] Running
	I1202 13:11:01.039523   66128 system_pods.go:89] "kube-scheduler-kindnet-256954" [028858ea-94f9-4636-8af5-fe2ffc5c8085] Running
	I1202 13:11:01.039528   66128 system_pods.go:89] "storage-provisioner" [b248bc7d-803c-4197-b0b5-ecf267ca2f69] Running
	I1202 13:11:01.039534   66128 system_pods.go:126] duration metric: took 201.873665ms to wait for k8s-apps to be running ...
	I1202 13:11:01.039542   66128 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:11:01.039582   66128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:11:01.054760   66128 system_svc.go:56] duration metric: took 15.210335ms WaitForService to wait for kubelet
	I1202 13:11:01.054783   66128 kubeadm.go:582] duration metric: took 16.236464148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:11:01.054804   66128 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:11:01.237319   66128 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:11:01.237351   66128 node_conditions.go:123] node cpu capacity is 2
	I1202 13:11:01.237363   66128 node_conditions.go:105] duration metric: took 182.553729ms to run NodePressure ...
	I1202 13:11:01.237377   66128 start.go:241] waiting for startup goroutines ...
	I1202 13:11:01.237386   66128 start.go:246] waiting for cluster config update ...
	I1202 13:11:01.237398   66128 start.go:255] writing updated cluster config ...
	I1202 13:11:01.237723   66128 ssh_runner.go:195] Run: rm -f paused
	I1202 13:11:01.283627   66128 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:11:01.285796   66128 out.go:177] * Done! kubectl is now configured to use "kindnet-256954" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.671731661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=402b7b2a-1283-40b5-a3fc-df409dd9c127 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.673808486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=172b4caf-ae04-44b4-ab5a-9399d6128970 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.674543290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145071674498976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=172b4caf-ae04-44b4-ab5a-9399d6128970 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.675486214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=324d0cfd-a836-41dd-b64d-5d1e54c6968f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.675591239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=324d0cfd-a836-41dd-b64d-5d1e54c6968f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.675894743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=324d0cfd-a836-41dd-b64d-5d1e54c6968f name=/runtime.v1.RuntimeService/ListContainers
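The Version, ImageFsInfo and ListContainers exchanges above are the kubelet polling CRI-O over its gRPC CRI socket; the empty ListContainersRequest is why CRI-O logs that no filters were applied and returns the full container list. A rough client-side sketch of the same RPC follows, assuming CRI-O's default socket path /var/run/crio/crio.sock and omitting retries and error classification.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: CRI-O is listening on its default runtime endpoint.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // An empty request (no filter) mirrors the "No filters were applied" debug lines.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.12s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }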
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.732989491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a88f6ae-fae9-49eb-b902-cf39fddb8ec6 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.733130688Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a88f6ae-fae9-49eb-b902-cf39fddb8ec6 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.735376401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=578a47e5-2aec-4728-bf01-4ae25ed7a5a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.735806806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145071735783402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=578a47e5-2aec-4728-bf01-4ae25ed7a5a5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.736525038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b448577-2582-45c2-bc77-7e791aaf52ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.736593003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b448577-2582-45c2-bc77-7e791aaf52ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.736831833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b448577-2582-45c2-bc77-7e791aaf52ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.758258099Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7f80779f-5a5b-47be-8780-b5481d1269ef name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.758560796Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8975d342-96fa-4173-b477-e25909ca76da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144523420799090,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T13:02:03.110562536Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c157520cf08c34425eb430ccaf7b9aa09c875698f57ca905bfdced3786d2cdf,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-tcr8r,Uid:2f017719-26ad-44ca-a44a-e6c20cd6438c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144523136673767,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-tcr8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f017719-26ad-44ca-a44a-e
6c20cd6438c,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T13:02:02.819957239Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&PodSandboxMetadata{Name:kube-proxy-d4vw4,Uid:487da76d-2fae-4df0-b663-0cf128ae2911,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144522464554514,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T13:02:01.549856500Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc
9-2qfb5,Uid:13f41c48-90af-4524-98fc-22daf331fbcb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144522280792454,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T13:02:01.972625335Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-2stsx,Uid:3cb9697b-974e-4f8e-9931-38fe3d971940,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144522268478949,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,k8s-app: kube-dns,
pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T13:02:01.951284985Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-653783,Uid:270f8464e0274fe9b311de1ab931524e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144511204576527,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.154:2379,kubernetes.io/config.hash: 270f8464e0274fe9b311de1ab931524e,kubernetes.io/config.seen: 2024-12-02T13:01:50.753221751Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ba9e13405b6d88815f3548cdf171ece54c
a9366355b0d5dd2f6eb4b0e475e08,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-653783,Uid:a0a5f9e2682f67fac1de53a495d621b8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733144511201050424,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.154:8444,kubernetes.io/config.hash: a0a5f9e2682f67fac1de53a495d621b8,kubernetes.io/config.seen: 2024-12-02T13:01:50.753216720Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-653783,Uid:cf5c277760a5f64606204d89db056873,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1733144511193297767,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cf5c277760a5f64606204d89db056873,kubernetes.io/config.seen: 2024-12-02T13:01:50.753220810Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-653783,Uid:5a43b5cbe13c8df408c11119c9d4af05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144511190986024,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5a43b5cbe13c8df408c11119c9d4af05,kubernetes.io/config.seen: 2024-12-02T13:01:50.753219880Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-653783,Uid:a0a5f9e2682f67fac1de53a495d621b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1733144224301733631,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.154:8444,kubernetes.io/config.hash: a0a5f9e2682f67fac1de53a495d621b8,kubernetes.io/config.s
een: 2024-12-02T12:57:03.609888943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7f80779f-5a5b-47be-8780-b5481d1269ef name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.759500250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f77a733-4ad9-4362-b8ff-fec2f90aab19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.759571191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f77a733-4ad9-4362-b8ff-fec2f90aab19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.759784092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f77a733-4ad9-4362-b8ff-fec2f90aab19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.782972303Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bda471d4-42f7-4f20-ba28-743fe0c24768 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.783168905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bda471d4-42f7-4f20-ba28-743fe0c24768 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.784463467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04ad3652-7107-4668-9746-a7992f94a23c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.784989030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145071784962205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04ad3652-7107-4668-9746-a7992f94a23c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.785880973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e18e2328-c952-4471-8cff-2056d6198dc9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.785962225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e18e2328-c952-4471-8cff-2056d6198dc9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:11 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:11:11.786286908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e18e2328-c952-4471-8cff-2056d6198dc9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c76c8542ddb2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   722462dbc74c3       storage-provisioner
	e9a33522f73a0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   976114a238b68       coredns-7c65d6cfc9-2qfb5
	5c38436dcda43       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   e31a2cd9e150c       coredns-7c65d6cfc9-2stsx
	77a0bb9ef86b5       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   868c587c843e0       kube-proxy-d4vw4
	d70644d4df653       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   994247c1b2781       kube-scheduler-default-k8s-diff-port-653783
	d6650cc0efc8c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   4ba9e13405b6d       kube-apiserver-default-k8s-diff-port-653783
	c51b7d1118274       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   fc1f450a1e85e       etcd-default-k8s-diff-port-653783
	455f46ddd7a39       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   d60121c1e9a8e       kube-controller-manager-default-k8s-diff-port-653783
	ce00f46dfc790       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   b1076630eca6f       kube-apiserver-default-k8s-diff-port-653783
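The table above reflects what the CRI runtime reports rather than Kubernetes itself. Assuming crictl is installed on the node (as it normally is in the minikube guest), a roughly equivalent listing can be pulled with:

    sudo crictl ps -a

which shows running and exited containers known to cri-o, matching the ListContainers responses recorded in the crio debug log above.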
	
	
	==> coredns [5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-653783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-653783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=default-k8s-diff-port-653783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T13_01_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 13:01:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-653783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 13:11:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 13:07:13 +0000   Mon, 02 Dec 2024 13:01:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 13:07:13 +0000   Mon, 02 Dec 2024 13:01:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 13:07:13 +0000   Mon, 02 Dec 2024 13:01:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 13:07:13 +0000   Mon, 02 Dec 2024 13:01:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    default-k8s-diff-port-653783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de64d5b8faf484ea5614ce3e9ffb71c
	  System UUID:                2de64d5b-8faf-484e-a561-4ce3e9ffb71c
	  Boot ID:                    ec4d4298-8c0e-4b7b-a674-67477f56d4bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2qfb5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-2stsx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-default-k8s-diff-port-653783                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-653783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-653783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-d4vw4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-default-k8s-diff-port-653783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-tcr8r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node default-k8s-diff-port-653783 event: Registered Node default-k8s-diff-port-653783 in Controller
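The node description above is standard `kubectl describe node` output; assuming the kubectl context carries the profile name (as minikube configures by default), it could be regenerated with:

    kubectl --context default-k8s-diff-port-653783 describe node default-k8s-diff-port-653783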
	
	
	==> dmesg <==
	[  +0.052605] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040870] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.955490] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.797235] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.143698] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.057043] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062353] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.193578] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.111646] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.281603] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[Dec 2 12:57] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +1.945598] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.061287] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.542295] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.227070] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 2 13:01] systemd-fstab-generator[2577]: Ignoring "noauto" option for root device
	[  +0.060123] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.998503] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +0.077285] kauditd_printk_skb: 54 callbacks suppressed
	[Dec 2 13:02] systemd-fstab-generator[3014]: Ignoring "noauto" option for root device
	[  +0.096968] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.974270] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762] <==
	{"level":"info","ts":"2024-12-02T13:01:51.763808Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"10fb7b0a157fc334","initial-advertise-peer-urls":["https://192.168.39.154:2380"],"listen-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.154:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-02T13:01:51.763621Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-12-02T13:01:51.766626Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-12-02T13:01:51.764753Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-02T13:01:52.618476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-02T13:01:52.618530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-02T13:01:52.618561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 1"}
	{"level":"info","ts":"2024-12-02T13:01:52.618574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.618579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.618588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.618595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.621548Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:default-k8s-diff-port-653783 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T13:01:52.621672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T13:01:52.622570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T13:01:52.622681Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:01:52.624263Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T13:01:52.625972Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T13:01:52.628350Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T13:01:52.629527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-12-02T13:01:52.630034Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T13:01:52.630104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-02T13:01:52.631463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:01:52.631553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:01:52.631607Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:10:04.490480Z","caller":"traceutil/trace.go:171","msg":"trace[468386243] transaction","detail":"{read_only:false; response_revision:878; number_of_response:1; }","duration":"182.773931ms","start":"2024-12-02T13:10:04.307635Z","end":"2024-12-02T13:10:04.490409Z","steps":["trace[468386243] 'process raft request'  (duration: 182.234734ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:11:12 up 14 min,  0 users,  load average: 0.07, 0.13, 0.11
	Linux default-k8s-diff-port-653783 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab] <==
	W1202 13:01:45.209744       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.234837       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.253874       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.294157       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.374726       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.382233       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.415132       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.427885       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.430346       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.463857       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.468212       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.496627       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.506018       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.545623       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.557520       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.681750       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.682971       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.733979       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.738452       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.757818       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.823625       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.883666       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.944272       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.988445       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:46.287477       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 13:06:55.086455       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:06:55.086475       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 13:06:55.087533       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:06:55.087619       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:07:55.088504       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:07:55.088773       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 13:07:55.088546       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:07:55.088957       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:07:55.090105       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:07:55.090111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:09:55.090591       1 handler_proxy.go:99] no RequestInfo found in the context
	W1202 13:09:55.090886       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:09:55.091166       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1202 13:09:55.091120       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:09:55.092432       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:09:55.092552       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8] <==
	E1202 13:06:01.098732       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:06:01.528197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:06:31.106541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:06:31.536010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:07:01.113709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:07:01.546197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:07:13.950793       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-653783"
	E1202 13:07:31.120345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:07:31.553423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:08:01.127724       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:08:01.561542       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:08:17.696050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="341.541µs"
	E1202 13:08:31.134672       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:08:31.569911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:08:32.693342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="65.544µs"
	E1202 13:09:01.141800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:01.577671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:09:31.148824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:31.586964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:10:01.158449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:10:01.596910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:10:31.167049       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:10:31.606269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:11:01.173759       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:11:01.616713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
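The recurring "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors above indicate that the metrics-server aggregated API never became available to the controller manager. Assuming the same context name as the profile, one way to check the APIService's condition from outside the node would be:

    kubectl --context default-k8s-diff-port-653783 get apiservice v1beta1.metrics.k8s.io

An Available condition of False here would be consistent with the 503 "service unavailable" responses seen in the kube-apiserver log.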
	
	
	==> kube-proxy [77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 13:02:03.514300       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 13:02:03.591048       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	E1202 13:02:03.591200       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 13:02:03.831236       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 13:02:03.831267       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 13:02:03.831299       1 server_linux.go:169] "Using iptables Proxier"
	I1202 13:02:03.859729       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 13:02:03.859953       1 server.go:483] "Version info" version="v1.31.2"
	I1202 13:02:03.859983       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 13:02:03.868932       1 config.go:199] "Starting service config controller"
	I1202 13:02:03.868978       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 13:02:03.869010       1 config.go:105] "Starting endpoint slice config controller"
	I1202 13:02:03.869047       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 13:02:03.876311       1 config.go:328] "Starting node config controller"
	I1202 13:02:03.897871       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 13:02:03.897902       1 shared_informer.go:320] Caches are synced for node config
	I1202 13:02:03.969159       1 shared_informer.go:320] Caches are synced for service config
	I1202 13:02:03.969358       1 shared_informer.go:320] Caches are synced for endpoint slice config
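The truncated nftables errors at the top of this section come from kube-proxy's cleanup path failing (the kernel rejects the "add table" commands); as the later lines show, it then falls back to the iptables proxier. If needed, the rules it programmed could be inspected on the node with something like:

    sudo iptables -t nat -L KUBE-SERVICES

(KUBE-SERVICES is the nat-table chain that kube-proxy's iptables mode maintains for service virtual IPs.)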
	
	
	==> kube-scheduler [d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c] <==
	W1202 13:01:54.109298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 13:01:54.109325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:54.109143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 13:01:54.109474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:54.109481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 13:01:54.109624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:54.986650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 13:01:54.986713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.036119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 13:01:55.036168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.080889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 13:01:55.080960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.103271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 13:01:55.103500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.105620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 13:01:55.105722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.187277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 13:01:55.187327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.367476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 13:01:55.367526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.391625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 13:01:55.391943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.575705       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 13:01:55.575838       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1202 13:01:57.501696       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 13:10:04 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:04.676525    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:10:06 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:06.794904    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145006794172736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:06 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:06.795335    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145006794172736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:15 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:15.677433    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:10:16 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:16.797260    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145016796493249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:16 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:16.797313    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145016796493249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:26 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:26.799168    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145026798639780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:26 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:26.799210    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145026798639780,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:29 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:29.676498    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:10:36 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:36.802887    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145036801999403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:36 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:36.802912    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145036801999403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:42 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:42.677388    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:10:46 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:46.804864    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145046804605619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:46 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:46.804906    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145046804605619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:55 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:55.676611    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:56.714438    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:56.805956    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145056805703666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:56 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:10:56.805979    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145056805703666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:06 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:11:06.810307    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145066809407477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:06 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:11:06.810564    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145066809407477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:07 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:11:07.679192    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	
	
	==> storage-provisioner [2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e] <==
	I1202 13:02:03.891573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 13:02:03.914890       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 13:02:03.916373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 13:02:03.927888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 13:02:03.928228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653783_584bcb35-8d00-4e7c-beee-83c26aae3904!
	I1202 13:02:03.932614       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0476abe9-2cc6-4fcf-b524-3c8b10aeda4c", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-653783_584bcb35-8d00-4e7c-beee-83c26aae3904 became leader
	I1202 13:02:04.029476       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653783_584bcb35-8d00-4e7c-beee-83c26aae3904!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tcr8r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 describe pod metrics-server-6867b74b74-tcr8r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653783 describe pod metrics-server-6867b74b74-tcr8r: exit status 1 (71.207952ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tcr8r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-653783 describe pod metrics-server-6867b74b74-tcr8r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.61s)
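Note on the post-mortem above: the failing describe was issued without a namespace, while the kubelet log earlier in this dump reports the pod as kube-system/metrics-server-6867b74b74-tcr8r, so the NotFound most likely reflects the missing -n kube-system rather than cluster state. A minimal sketch for re-running the same checks by hand against this profile (assuming the default-k8s-diff-port-653783 context still exists; illustrative only, not part of the test run):

# same field selector the helper uses to find non-running pods
kubectl --context default-k8s-diff-port-653783 get pods -A --field-selector=status.phase!=Running
# describe the flagged pod in its actual namespace (kube-system, per the kubelet log above)
kubectl --context default-k8s-diff-port-653783 -n kube-system describe pod metrics-server-6867b74b74-tcr8r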

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (501.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-953044 -n embed-certs-953044
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-02 13:11:35.294201951 +0000 UTC m=+6074.125986962
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-953044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-953044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.751µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-953044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
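For context, the empty "Addon deployment info" above comes from a describe call made after the test's 9m0s context had already expired (it aborted in 1.751µs), so no live deployment data was captured. A minimal sketch for checking the dashboard addon image and pods by hand (assuming the embed-certs-953044 context is still reachable; illustrative only, not part of the test run):

# print the container image(s) of the dashboard-metrics-scraper deployment; the assertion expects this to contain registry.k8s.io/echoserver:1.4
kubectl --context embed-certs-953044 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
# list the pods the test was waiting for
kubectl --context embed-certs-953044 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard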
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-953044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-953044 logs -n 25: (1.178204642s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo docker                        | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo cat                           | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo                               | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo find                          | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-256954 sudo crio                          | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-256954                                    | kindnet-256954        | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC | 02 Dec 24 13:11 UTC |
	| start   | -p custom-flannel-256954                             | custom-flannel-256954 | jenkins | v1.34.0 | 02 Dec 24 13:11 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 13:11:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 13:11:34.617415   69810 out.go:345] Setting OutFile to fd 1 ...
	I1202 13:11:34.617511   69810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:11:34.617522   69810 out.go:358] Setting ErrFile to fd 2...
	I1202 13:11:34.617528   69810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:11:34.617757   69810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 13:11:34.618310   69810 out.go:352] Setting JSON to false
	I1202 13:11:34.619286   69810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6847,"bootTime":1733138248,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 13:11:34.619375   69810 start.go:139] virtualization: kvm guest
	I1202 13:11:34.621414   69810 out.go:177] * [custom-flannel-256954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 13:11:34.622710   69810 notify.go:220] Checking for updates...
	I1202 13:11:34.622737   69810 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 13:11:34.623995   69810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 13:11:34.625225   69810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:11:34.626355   69810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:11:34.627524   69810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 13:11:34.628773   69810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 13:11:34.630385   69810 config.go:182] Loaded profile config "calico-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:11:34.630489   69810 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:11:34.630564   69810 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:11:34.630635   69810 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 13:11:34.665974   69810 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 13:11:34.667132   69810 start.go:297] selected driver: kvm2
	I1202 13:11:34.667144   69810 start.go:901] validating driver "kvm2" against <nil>
	I1202 13:11:34.667164   69810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 13:11:34.667828   69810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:11:34.667901   69810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 13:11:34.682628   69810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 13:11:34.682662   69810 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 13:11:34.682863   69810 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:11:34.682888   69810 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1202 13:11:34.682896   69810 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1202 13:11:34.682936   69810 start.go:340] cluster config:
	{Name:custom-flannel-256954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:11:34.683039   69810 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:11:34.684611   69810 out.go:177] * Starting "custom-flannel-256954" primary control-plane node in "custom-flannel-256954" cluster
	
	
	==> CRI-O <==
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.904157455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=495f77fc-205f-41c4-a927-36c8be00236f name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.905713650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=383ccf10-898b-4d3a-94ee-380e31c69ef7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.906245816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145095906220802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=383ccf10-898b-4d3a-94ee-380e31c69ef7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.907476084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f812ffd-4d82-4c78-82e0-b7f27a69533b name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.907575003Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f812ffd-4d82-4c78-82e0-b7f27a69533b name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.911478392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f812ffd-4d82-4c78-82e0-b7f27a69533b name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.938642274Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bc1ad6c-eb76-4882-aa21-03b15cf6cdbe name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.938858579Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fwt6z,Uid:06a23976-b261-4baa-8f66-e966addfb41a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144045137911504,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T12:54:03.312106508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tm4ct,Uid:109d2f58-c2c8-4bf0-8232-fdbeb078305d,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144045100435822,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T12:54:03.286163006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:114a24ea3e25455c2cb53e24c8363167a88fde7f190af5eeec85cadb07975bf6,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-fwhvq,Uid:e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144043998336698,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-fwhvq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-12-02T12:54:03.680516996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:35fdd473-75b2-41d6-95bf-1bcab189dae5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144043809747839,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":
[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T12:54:03.503581220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&PodSandboxMetadata{Name:kube-proxy-kg4z6,Uid:c6b74e9c-47e4-4b1c-a219-685cc119219b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144043259106728,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T12:54:02.942655057Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-953044,Uid:52f86423e6fd1bda098a1bcfd3df2272,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144032471618250,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.203:2379,kubernetes.io/config.hash: 52f86423e6fd1bda098a1bcfd3df2272,kubernetes.io/config.seen: 2024-12-02T12:53:52.021826461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,
Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-953044,Uid:41e1be5264f0f225f54bf06a3e08f300,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144032461495393,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41e1be5264f0f225f54bf06a3e08f300,kubernetes.io/config.seen: 2024-12-02T12:53:52.021825666Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-953044,Uid:722b8ca126b547dea166a1be58f44cfa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733144032456391336,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD
,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 722b8ca126b547dea166a1be58f44cfa,kubernetes.io/config.seen: 2024-12-02T12:53:52.021824747Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-953044,Uid:7423d0058a9f4bff5e4eacc5ef592b3d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733144032454771144,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.7
2.203:8443,kubernetes.io/config.hash: 7423d0058a9f4bff5e4eacc5ef592b3d,kubernetes.io/config.seen: 2024-12-02T12:53:52.021821342Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2bc1ad6c-eb76-4882-aa21-03b15cf6cdbe name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.939559931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2ea808d-5f7e-46d8-9a33-09a3736805e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.939629318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2ea808d-5f7e-46d8-9a33-09a3736805e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.939817572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2ea808d-5f7e-46d8-9a33-09a3736805e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.952649296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e1d9f9d-695f-4632-ba74-c5beb93edd58 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.952720253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e1d9f9d-695f-4632-ba74-c5beb93edd58 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.953809982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1970dc22-1ba8-4bf7-afc8-bad562da9651 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.954299717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145095954281695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1970dc22-1ba8-4bf7-afc8-bad562da9651 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.954732727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbfeb7cc-612a-4b4c-9c01-8b5c1522821f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.954782915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbfeb7cc-612a-4b4c-9c01-8b5c1522821f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.955019043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbfeb7cc-612a-4b4c-9c01-8b5c1522821f name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.991668914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d52d2f94-7b9b-4dd0-b07c-f7c2702b157c name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.991768669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d52d2f94-7b9b-4dd0-b07c-f7c2702b157c name=/runtime.v1.RuntimeService/Version
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.992916872Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40b64c01-47c8-455a-8897-8aac255bc1e9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.993526992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145095993500410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40b64c01-47c8-455a-8897-8aac255bc1e9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.994397413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd3780f1-2c63-45d9-b9aa-447ef2ec74c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.994464070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd3780f1-2c63-45d9-b9aa-447ef2ec74c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:11:35 embed-certs-953044 crio[710]: time="2024-12-02 13:11:35.994721607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043,PodSandboxId:48fd491b98c0568aa6b75d40298a390a2d43ecef5297e4d30d33dcfb851af493,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045435084166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tm4ct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 109d2f58-c2c8-4bf0-8232-fdbeb078305d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24,PodSandboxId:a862c8481f4a3d9c26824413093fe931339d929799e46310f650a607514e0739,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144045458111475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fwt6z,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 06a23976-b261-4baa-8f66-e966addfb41a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b,PodSandboxId:438fcd4ea84408bce624479349b08f5321a6833156a244da2fc211ed75379d4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1733144043927439884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fdd473-75b2-41d6-95bf-1bcab189dae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93,PodSandboxId:2f82a659a255ec373a7a17d8ccce5c55d9f938993185d172afe6be1c2879ea50,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733144043529857677,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kg4z6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6b74e9c-47e4-4b1c-a219-685cc119219b,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112,PodSandboxId:13a76fba8eeb5e1fa84e7b28abb201e640193fb570339cb2608ee55da8c04543,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144032700295498,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52f86423e6fd1bda098a1bcfd3df2272,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5,PodSandboxId:1c95b5e0c51835b15a5afc489880a759dc39ec2f8cf417bdb8ff59d06c2cb6cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733144032698169445,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e1be5264f0f225f54bf06a3e08f300,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad,PodSandboxId:c7fcc106b9666b83488f635c1d6f5266dac2aa11a57544ab67adf0361c664e6d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144032674791962,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824,PodSandboxId:278a05742631cb6ed533592bb0afd6fc9140a4ab6556818eea667165bde48fb9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144032630564404,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 722b8ca126b547dea166a1be58f44cfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5,PodSandboxId:ae6892854beae8cb5c98933cf0119c37a3b3b0aef596779a18bc1a3bdc819b86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733143750864052824,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-953044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7423d0058a9f4bff5e4eacc5ef592b3d,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd3780f1-2c63-45d9-b9aa-447ef2ec74c1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f046b95d54fed       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   a862c8481f4a3       coredns-7c65d6cfc9-fwt6z
	d08817fb6c3d9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   48fd491b98c05       coredns-7c65d6cfc9-tm4ct
	0b7cadda79ea1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   438fcd4ea8440       storage-provisioner
	0cb21e7d976f8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 minutes ago      Running             kube-proxy                0                   2f82a659a255e       kube-proxy-kg4z6
	1cbeab4124925       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   13a76fba8eeb5       etcd-embed-certs-953044
	cc603f56c0eda       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   17 minutes ago      Running             kube-scheduler            2                   1c95b5e0c5183       kube-scheduler-embed-certs-953044
	b691ba9ee672e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   17 minutes ago      Running             kube-apiserver            2                   c7fcc106b9666       kube-apiserver-embed-certs-953044
	ae1fddd0b9993       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   17 minutes ago      Running             kube-controller-manager   2                   278a05742631c       kube-controller-manager-embed-certs-953044
	57472bf615dac       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 minutes ago      Exited              kube-apiserver            1                   ae6892854beae       kube-apiserver-embed-certs-953044
	
	
	==> coredns [d08817fb6c3d9679814ecf7ff46ed4f764eafae34fd476a22968648a2a9f5043] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f046b95d54fed68d1b3f71d59321aaf977618d50bb50fed2927d328f30e84e24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-953044
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-953044
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=embed-certs-953044
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 12:53:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-953044
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 13:11:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 13:09:27 +0000   Mon, 02 Dec 2024 12:53:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 13:09:27 +0000   Mon, 02 Dec 2024 12:53:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 13:09:27 +0000   Mon, 02 Dec 2024 12:53:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 13:09:27 +0000   Mon, 02 Dec 2024 12:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.203
	  Hostname:    embed-certs-953044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df6cc10471794eaba57ca35f6b869cf8
	  System UUID:                df6cc104-7179-4eab-a57c-a35f6b869cf8
	  Boot ID:                    19542c91-0491-4a31-9489-18c0c582728d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fwt6z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-tm4ct                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-embed-certs-953044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-embed-certs-953044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-embed-certs-953044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-kg4z6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-embed-certs-953044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-fwhvq               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node embed-certs-953044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node embed-certs-953044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node embed-certs-953044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node embed-certs-953044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node embed-certs-953044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node embed-certs-953044 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node embed-certs-953044 event: Registered Node embed-certs-953044 in Controller
	
	
	==> dmesg <==
	[  +0.039923] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041424] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.780129] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.632035] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 2 12:49] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.064141] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080791] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.209584] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.169534] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.327642] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[  +4.462982] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +0.065614] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.961439] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +5.665421] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.267782] kauditd_printk_skb: 54 callbacks suppressed
	[ +23.609016] kauditd_printk_skb: 31 callbacks suppressed
	[Dec 2 12:53] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.242082] systemd-fstab-generator[2649]: Ignoring "noauto" option for root device
	[  +4.498456] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.553578] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[Dec 2 12:54] systemd-fstab-generator[3101]: Ignoring "noauto" option for root device
	[  +0.083707] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.776886] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [1cbeab4124925b0160dd7aa651dcfded39bfa15ec4aee177a9786ca009986112] <==
	{"level":"info","ts":"2024-12-02T12:53:53.976139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-02T12:53:53.976190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgPreVoteResp from fd1c782511c6d1a at term 1"}
	{"level":"info","ts":"2024-12-02T12:53:53.976223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became candidate at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.976247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a received MsgVoteResp from fd1c782511c6d1a at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.976274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fd1c782511c6d1a became leader at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.976299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fd1c782511c6d1a elected leader fd1c782511c6d1a at term 2"}
	{"level":"info","ts":"2024-12-02T12:53:53.980235Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981220Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fd1c782511c6d1a","local-member-attributes":"{Name:embed-certs-953044 ClientURLs:[https://192.168.72.203:2379]}","request-path":"/0/members/fd1c782511c6d1a/attributes","cluster-id":"e420fb3f9edbaec1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T12:53:53.981680Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e420fb3f9edbaec1","local-member-id":"fd1c782511c6d1a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981805Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:53:53.981815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:53:53.981782Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:53:53.982821Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:53:53.983585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T12:53:53.983655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T12:53:53.983686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-02T12:53:53.984467Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:53:53.988718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.203:2379"}
	{"level":"info","ts":"2024-12-02T13:03:54.009750Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-12-02T13:03:54.018608Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"8.243051ms","hash":2593340555,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-02T13:03:54.018676Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2593340555,"revision":721,"compact-revision":-1}
	{"level":"info","ts":"2024-12-02T13:08:54.016093Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-12-02T13:08:54.019651Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":963,"took":"2.79802ms","hash":2573699534,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-02T13:08:54.019719Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2573699534,"revision":963,"compact-revision":721}
	
	
	==> kernel <==
	 13:11:36 up 22 min,  0 users,  load average: 0.04, 0.10, 0.10
	Linux embed-certs-953044 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57472bf615dac95d586758dd345e63ded56aa2208341a6fdaecd827829db8db5] <==
	W1202 12:53:49.289814       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.297259       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.307105       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.314616       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.324305       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.378572       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.405174       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.409788       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.411098       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.437426       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.506217       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.526656       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.569265       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.582033       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.584310       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.650865       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.696493       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.712148       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.716560       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.776710       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.917391       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:49.972749       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:50.050420       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:50.069759       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 12:53:50.137282       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b691ba9ee672eb0217cd4de27f6401268a28aaf1eb9ac64e2865ca6e721695ad] <==
	I1202 13:06:56.416890       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:06:56.416942       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:08:55.415360       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:08:55.415572       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 13:08:56.417839       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:08:56.417881       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 13:08:56.418003       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:08:56.418092       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:08:56.419009       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:08:56.420168       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:09:56.419347       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:09:56.419459       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 13:09:56.420580       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:09:56.420698       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:09:56.420803       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:09:56.422687       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae1fddd0b99936e33b9c983dce9c24768d5179373f466b4c1c42526db19d4824] <==
	E1202 13:06:32.468094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:06:32.987786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:07:02.473899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:07:02.996336       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:07:32.482598       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:07:33.004357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:08:02.488690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:08:03.011826       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:08:32.495418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:08:33.021694       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:09:02.502650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:03.031061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:09:27.921550       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-953044"
	E1202 13:09:32.511685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:33.040201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:10:02.519038       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:10:03.048612       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:10:17.967213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="622.897µs"
	I1202 13:10:29.972649       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.260355ms"
	E1202 13:10:32.526641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:10:33.057653       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:11:02.533552       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:11:03.066471       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:11:32.541098       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:11:33.075618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0cb21e7d976f8638f1e2c1a42e5086c5be15812bd6c97067fdba75d9dd749c93] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 12:54:03.875062       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 12:54:03.894940       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.203"]
	E1202 12:54:03.895285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 12:54:03.979534       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 12:54:03.979562       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 12:54:03.979593       1 server_linux.go:169] "Using iptables Proxier"
	I1202 12:54:03.984208       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 12:54:03.984513       1 server.go:483] "Version info" version="v1.31.2"
	I1202 12:54:03.984524       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:54:03.985812       1 config.go:199] "Starting service config controller"
	I1202 12:54:03.985828       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 12:54:03.985848       1 config.go:105] "Starting endpoint slice config controller"
	I1202 12:54:03.985852       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 12:54:03.993138       1 config.go:328] "Starting node config controller"
	I1202 12:54:03.993153       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 12:54:04.090859       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 12:54:04.090938       1 shared_informer.go:320] Caches are synced for service config
	I1202 12:54:04.093185       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cc603f56c0eda934cbb56ba9ae62f3cc2beba1efab239b32c2be6ac920ca3bc5] <==
	W1202 12:53:55.437363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 12:53:55.437408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:55.437521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 12:53:55.437554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:55.437601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 12:53:55.437638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.372083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 12:53:56.372221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.417187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 12:53:56.417363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.448615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1202 12:53:56.448728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.457537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 12:53:56.457640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.473425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 12:53:56.473611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.502355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 12:53:56.502483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.556282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 12:53:56.556363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.657597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 12:53:56.657755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 12:53:56.679630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 12:53:56.679707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1202 12:53:57.028741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 13:10:18 embed-certs-953044 kubelet[2979]: E1202 13:10:18.161794    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145018161381813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:28 embed-certs-953044 kubelet[2979]: E1202 13:10:28.162869    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145028162525152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:28 embed-certs-953044 kubelet[2979]: E1202 13:10:28.163343    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145028162525152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:29 embed-certs-953044 kubelet[2979]: E1202 13:10:29.939891    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:10:38 embed-certs-953044 kubelet[2979]: E1202 13:10:38.165352    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145038164557357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:38 embed-certs-953044 kubelet[2979]: E1202 13:10:38.165756    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145038164557357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:44 embed-certs-953044 kubelet[2979]: E1202 13:10:44.940367    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:10:48 embed-certs-953044 kubelet[2979]: E1202 13:10:48.167697    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145048167217251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:48 embed-certs-953044 kubelet[2979]: E1202 13:10:48.168001    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145048167217251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:56 embed-certs-953044 kubelet[2979]: E1202 13:10:56.939419    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:10:57 embed-certs-953044 kubelet[2979]: E1202 13:10:57.970760    2979 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 13:10:57 embed-certs-953044 kubelet[2979]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 13:10:57 embed-certs-953044 kubelet[2979]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 13:10:57 embed-certs-953044 kubelet[2979]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 13:10:57 embed-certs-953044 kubelet[2979]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 13:10:58 embed-certs-953044 kubelet[2979]: E1202 13:10:58.169874    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145058169411486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:10:58 embed-certs-953044 kubelet[2979]: E1202 13:10:58.170311    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145058169411486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:08 embed-certs-953044 kubelet[2979]: E1202 13:11:08.172286    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145068171824490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:08 embed-certs-953044 kubelet[2979]: E1202 13:11:08.172883    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145068171824490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:10 embed-certs-953044 kubelet[2979]: E1202 13:11:10.940334    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:11:18 embed-certs-953044 kubelet[2979]: E1202 13:11:18.174334    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145078173668692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:18 embed-certs-953044 kubelet[2979]: E1202 13:11:18.174378    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145078173668692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:24 embed-certs-953044 kubelet[2979]: E1202 13:11:24.939470    2979 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fwhvq" podUID="e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f"
	Dec 02 13:11:28 embed-certs-953044 kubelet[2979]: E1202 13:11:28.176828    2979 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145088176431353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:11:28 embed-certs-953044 kubelet[2979]: E1202 13:11:28.177338    2979 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145088176431353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0b7cadda79ea1420beed17bbabb5dfba27858bc0cc5540f25c96cda3d611fe8b] <==
	I1202 12:54:04.066697       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 12:54:04.081000       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 12:54:04.081782       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 12:54:04.099234       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 12:54:04.099279       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1ca092b-593c-414f-b3fd-59e5dbde38d3", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-953044_b6a066d7-5c71-4881-9a27-88e4332c6dee became leader
	I1202 12:54:04.099379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-953044_b6a066d7-5c71-4881-9a27-88e4332c6dee!
	I1202 12:54:04.200504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-953044_b6a066d7-5c71-4881-9a27-88e4332c6dee!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-953044 -n embed-certs-953044
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-953044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fwhvq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-953044 describe pod metrics-server-6867b74b74-fwhvq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-953044 describe pod metrics-server-6867b74b74-fwhvq: exit status 1 (60.054601ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fwhvq" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-953044 describe pod metrics-server-6867b74b74-fwhvq: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (501.94s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (374s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658679 -n no-preload-658679
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-02 13:09:47.919698892 +0000 UTC m=+5966.751483894
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-658679 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-658679 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.857µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-658679 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-658679 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-658679 logs -n 25: (1.164162532s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-983490             | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-983490                  | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658679                  | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-983490 image list                           | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:49 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-666766        | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-953044                 | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC | 02 Dec 24 13:02 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 13:09 UTC | 02 Dec 24 13:09 UTC |
	| start   | -p auto-256954 --memory=3072                           | auto-256954                  | jenkins | v1.34.0 | 02 Dec 24 13:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 13:09:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 13:09:30.329234   65727 out.go:345] Setting OutFile to fd 1 ...
	I1202 13:09:30.329342   65727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:09:30.329351   65727 out.go:358] Setting ErrFile to fd 2...
	I1202 13:09:30.329355   65727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:09:30.329560   65727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 13:09:30.330108   65727 out.go:352] Setting JSON to false
	I1202 13:09:30.331038   65727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6722,"bootTime":1733138248,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 13:09:30.331126   65727 start.go:139] virtualization: kvm guest
	I1202 13:09:30.333959   65727 out.go:177] * [auto-256954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 13:09:30.335236   65727 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 13:09:30.335251   65727 notify.go:220] Checking for updates...
	I1202 13:09:30.337431   65727 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 13:09:30.338593   65727 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:09:30.339667   65727 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:09:30.340704   65727 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 13:09:30.341713   65727 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 13:09:30.343187   65727 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:30.343272   65727 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:30.343364   65727 config.go:182] Loaded profile config "no-preload-658679": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:09:30.343442   65727 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 13:09:30.379414   65727 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 13:09:30.380657   65727 start.go:297] selected driver: kvm2
	I1202 13:09:30.380670   65727 start.go:901] validating driver "kvm2" against <nil>
	I1202 13:09:30.380680   65727 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 13:09:30.381328   65727 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:09:30.381391   65727 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 13:09:30.396701   65727 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 13:09:30.396747   65727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 13:09:30.396962   65727 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:09:30.396989   65727 cni.go:84] Creating CNI manager for ""
	I1202 13:09:30.397025   65727 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:09:30.397033   65727 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 13:09:30.397073   65727 start.go:340] cluster config:
	{Name:auto-256954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:09:30.397172   65727 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:09:30.398834   65727 out.go:177] * Starting "auto-256954" primary control-plane node in "auto-256954" cluster
	I1202 13:09:30.399948   65727 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 13:09:30.399973   65727 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 13:09:30.399982   65727 cache.go:56] Caching tarball of preloaded images
	I1202 13:09:30.400074   65727 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 13:09:30.400088   65727 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 13:09:30.400163   65727 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/config.json ...
	I1202 13:09:30.400181   65727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/auto-256954/config.json: {Name:mk094b3fb9632570cc8d16c5413043b2096cbee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:09:30.400350   65727 start.go:360] acquireMachinesLock for auto-256954: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 13:09:30.400392   65727 start.go:364] duration metric: took 25.657µs to acquireMachinesLock for "auto-256954"
	I1202 13:09:30.400416   65727 start.go:93] Provisioning new machine with config: &{Name:auto-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:09:30.400502   65727 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 13:09:30.401961   65727 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1202 13:09:30.402092   65727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:09:30.402139   65727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:09:30.416592   65727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I1202 13:09:30.417114   65727 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:09:30.417773   65727 main.go:141] libmachine: Using API Version  1
	I1202 13:09:30.417797   65727 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:09:30.418168   65727 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:09:30.418401   65727 main.go:141] libmachine: (auto-256954) Calling .GetMachineName
	I1202 13:09:30.418550   65727 main.go:141] libmachine: (auto-256954) Calling .DriverName
	I1202 13:09:30.418721   65727 start.go:159] libmachine.API.Create for "auto-256954" (driver="kvm2")
	I1202 13:09:30.418772   65727 client.go:168] LocalClient.Create starting
	I1202 13:09:30.418811   65727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 13:09:30.418847   65727 main.go:141] libmachine: Decoding PEM data...
	I1202 13:09:30.418864   65727 main.go:141] libmachine: Parsing certificate...
	I1202 13:09:30.418923   65727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 13:09:30.418957   65727 main.go:141] libmachine: Decoding PEM data...
	I1202 13:09:30.418997   65727 main.go:141] libmachine: Parsing certificate...
	I1202 13:09:30.419024   65727 main.go:141] libmachine: Running pre-create checks...
	I1202 13:09:30.419086   65727 main.go:141] libmachine: (auto-256954) Calling .PreCreateCheck
	I1202 13:09:30.419424   65727 main.go:141] libmachine: (auto-256954) Calling .GetConfigRaw
	I1202 13:09:30.419751   65727 main.go:141] libmachine: Creating machine...
	I1202 13:09:30.419765   65727 main.go:141] libmachine: (auto-256954) Calling .Create
	I1202 13:09:30.419875   65727 main.go:141] libmachine: (auto-256954) Creating KVM machine...
	I1202 13:09:30.421128   65727 main.go:141] libmachine: (auto-256954) DBG | found existing default KVM network
	I1202 13:09:30.422245   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:30.422114   65749 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:4f:fe} reservation:<nil>}
	I1202 13:09:30.423300   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:30.423213   65749 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000306a30}
	I1202 13:09:30.423322   65727 main.go:141] libmachine: (auto-256954) DBG | created network xml: 
	I1202 13:09:30.423329   65727 main.go:141] libmachine: (auto-256954) DBG | <network>
	I1202 13:09:30.423337   65727 main.go:141] libmachine: (auto-256954) DBG |   <name>mk-auto-256954</name>
	I1202 13:09:30.423344   65727 main.go:141] libmachine: (auto-256954) DBG |   <dns enable='no'/>
	I1202 13:09:30.423355   65727 main.go:141] libmachine: (auto-256954) DBG |   
	I1202 13:09:30.423364   65727 main.go:141] libmachine: (auto-256954) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1202 13:09:30.423370   65727 main.go:141] libmachine: (auto-256954) DBG |     <dhcp>
	I1202 13:09:30.423376   65727 main.go:141] libmachine: (auto-256954) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1202 13:09:30.423385   65727 main.go:141] libmachine: (auto-256954) DBG |     </dhcp>
	I1202 13:09:30.423393   65727 main.go:141] libmachine: (auto-256954) DBG |   </ip>
	I1202 13:09:30.423399   65727 main.go:141] libmachine: (auto-256954) DBG |   
	I1202 13:09:30.423404   65727 main.go:141] libmachine: (auto-256954) DBG | </network>
	I1202 13:09:30.423408   65727 main.go:141] libmachine: (auto-256954) DBG | 
	I1202 13:09:30.428334   65727 main.go:141] libmachine: (auto-256954) DBG | trying to create private KVM network mk-auto-256954 192.168.50.0/24...
	I1202 13:09:30.501387   65727 main.go:141] libmachine: (auto-256954) DBG | private KVM network mk-auto-256954 192.168.50.0/24 created
	I1202 13:09:30.501439   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:30.501365   65749 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:09:30.501451   65727 main.go:141] libmachine: (auto-256954) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954 ...
	I1202 13:09:30.501481   65727 main.go:141] libmachine: (auto-256954) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 13:09:30.501565   65727 main.go:141] libmachine: (auto-256954) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 13:09:30.777027   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:30.776872   65749 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/id_rsa...
	I1202 13:09:30.841077   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:30.840973   65749 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/auto-256954.rawdisk...
	I1202 13:09:30.841108   65727 main.go:141] libmachine: (auto-256954) DBG | Writing magic tar header
	I1202 13:09:30.841129   65727 main.go:141] libmachine: (auto-256954) DBG | Writing SSH key tar header
	I1202 13:09:30.841140   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:30.841108   65749 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954 ...
	I1202 13:09:30.841282   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954
	I1202 13:09:30.841315   65727 main.go:141] libmachine: (auto-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954 (perms=drwx------)
	I1202 13:09:30.841341   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 13:09:30.841352   65727 main.go:141] libmachine: (auto-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 13:09:30.841370   65727 main.go:141] libmachine: (auto-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 13:09:30.841382   65727 main.go:141] libmachine: (auto-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 13:09:30.841396   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:09:30.841407   65727 main.go:141] libmachine: (auto-256954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 13:09:30.841424   65727 main.go:141] libmachine: (auto-256954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 13:09:30.841438   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 13:09:30.841453   65727 main.go:141] libmachine: (auto-256954) Creating domain...
	I1202 13:09:30.841470   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 13:09:30.841481   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home/jenkins
	I1202 13:09:30.841492   65727 main.go:141] libmachine: (auto-256954) DBG | Checking permissions on dir: /home
	I1202 13:09:30.841504   65727 main.go:141] libmachine: (auto-256954) DBG | Skipping /home - not owner
	I1202 13:09:30.842575   65727 main.go:141] libmachine: (auto-256954) define libvirt domain using xml: 
	I1202 13:09:30.842598   65727 main.go:141] libmachine: (auto-256954) <domain type='kvm'>
	I1202 13:09:30.842608   65727 main.go:141] libmachine: (auto-256954)   <name>auto-256954</name>
	I1202 13:09:30.842621   65727 main.go:141] libmachine: (auto-256954)   <memory unit='MiB'>3072</memory>
	I1202 13:09:30.842631   65727 main.go:141] libmachine: (auto-256954)   <vcpu>2</vcpu>
	I1202 13:09:30.842639   65727 main.go:141] libmachine: (auto-256954)   <features>
	I1202 13:09:30.842646   65727 main.go:141] libmachine: (auto-256954)     <acpi/>
	I1202 13:09:30.842655   65727 main.go:141] libmachine: (auto-256954)     <apic/>
	I1202 13:09:30.842663   65727 main.go:141] libmachine: (auto-256954)     <pae/>
	I1202 13:09:30.842669   65727 main.go:141] libmachine: (auto-256954)     
	I1202 13:09:30.842676   65727 main.go:141] libmachine: (auto-256954)   </features>
	I1202 13:09:30.842685   65727 main.go:141] libmachine: (auto-256954)   <cpu mode='host-passthrough'>
	I1202 13:09:30.842691   65727 main.go:141] libmachine: (auto-256954)   
	I1202 13:09:30.842705   65727 main.go:141] libmachine: (auto-256954)   </cpu>
	I1202 13:09:30.842715   65727 main.go:141] libmachine: (auto-256954)   <os>
	I1202 13:09:30.842723   65727 main.go:141] libmachine: (auto-256954)     <type>hvm</type>
	I1202 13:09:30.842751   65727 main.go:141] libmachine: (auto-256954)     <boot dev='cdrom'/>
	I1202 13:09:30.842766   65727 main.go:141] libmachine: (auto-256954)     <boot dev='hd'/>
	I1202 13:09:30.842772   65727 main.go:141] libmachine: (auto-256954)     <bootmenu enable='no'/>
	I1202 13:09:30.842779   65727 main.go:141] libmachine: (auto-256954)   </os>
	I1202 13:09:30.842785   65727 main.go:141] libmachine: (auto-256954)   <devices>
	I1202 13:09:30.842793   65727 main.go:141] libmachine: (auto-256954)     <disk type='file' device='cdrom'>
	I1202 13:09:30.842805   65727 main.go:141] libmachine: (auto-256954)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/boot2docker.iso'/>
	I1202 13:09:30.842813   65727 main.go:141] libmachine: (auto-256954)       <target dev='hdc' bus='scsi'/>
	I1202 13:09:30.842820   65727 main.go:141] libmachine: (auto-256954)       <readonly/>
	I1202 13:09:30.842825   65727 main.go:141] libmachine: (auto-256954)     </disk>
	I1202 13:09:30.842837   65727 main.go:141] libmachine: (auto-256954)     <disk type='file' device='disk'>
	I1202 13:09:30.842849   65727 main.go:141] libmachine: (auto-256954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 13:09:30.842862   65727 main.go:141] libmachine: (auto-256954)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/auto-256954/auto-256954.rawdisk'/>
	I1202 13:09:30.842870   65727 main.go:141] libmachine: (auto-256954)       <target dev='hda' bus='virtio'/>
	I1202 13:09:30.842875   65727 main.go:141] libmachine: (auto-256954)     </disk>
	I1202 13:09:30.842886   65727 main.go:141] libmachine: (auto-256954)     <interface type='network'>
	I1202 13:09:30.842892   65727 main.go:141] libmachine: (auto-256954)       <source network='mk-auto-256954'/>
	I1202 13:09:30.842896   65727 main.go:141] libmachine: (auto-256954)       <model type='virtio'/>
	I1202 13:09:30.842900   65727 main.go:141] libmachine: (auto-256954)     </interface>
	I1202 13:09:30.842905   65727 main.go:141] libmachine: (auto-256954)     <interface type='network'>
	I1202 13:09:30.842910   65727 main.go:141] libmachine: (auto-256954)       <source network='default'/>
	I1202 13:09:30.842922   65727 main.go:141] libmachine: (auto-256954)       <model type='virtio'/>
	I1202 13:09:30.842929   65727 main.go:141] libmachine: (auto-256954)     </interface>
	I1202 13:09:30.842940   65727 main.go:141] libmachine: (auto-256954)     <serial type='pty'>
	I1202 13:09:30.842947   65727 main.go:141] libmachine: (auto-256954)       <target port='0'/>
	I1202 13:09:30.842951   65727 main.go:141] libmachine: (auto-256954)     </serial>
	I1202 13:09:30.842957   65727 main.go:141] libmachine: (auto-256954)     <console type='pty'>
	I1202 13:09:30.842961   65727 main.go:141] libmachine: (auto-256954)       <target type='serial' port='0'/>
	I1202 13:09:30.842968   65727 main.go:141] libmachine: (auto-256954)     </console>
	I1202 13:09:30.842978   65727 main.go:141] libmachine: (auto-256954)     <rng model='virtio'>
	I1202 13:09:30.842990   65727 main.go:141] libmachine: (auto-256954)       <backend model='random'>/dev/random</backend>
	I1202 13:09:30.842996   65727 main.go:141] libmachine: (auto-256954)     </rng>
	I1202 13:09:30.843011   65727 main.go:141] libmachine: (auto-256954)     
	I1202 13:09:30.843020   65727 main.go:141] libmachine: (auto-256954)     
	I1202 13:09:30.843031   65727 main.go:141] libmachine: (auto-256954)   </devices>
	I1202 13:09:30.843040   65727 main.go:141] libmachine: (auto-256954) </domain>
	I1202 13:09:30.843049   65727 main.go:141] libmachine: (auto-256954) 
	I1202 13:09:30.847568   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:d4:f3:84 in network default
	I1202 13:09:30.848209   65727 main.go:141] libmachine: (auto-256954) Ensuring networks are active...
	I1202 13:09:30.848251   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:30.849016   65727 main.go:141] libmachine: (auto-256954) Ensuring network default is active
	I1202 13:09:30.849267   65727 main.go:141] libmachine: (auto-256954) Ensuring network mk-auto-256954 is active
	I1202 13:09:30.849889   65727 main.go:141] libmachine: (auto-256954) Getting domain xml...
	I1202 13:09:30.850601   65727 main.go:141] libmachine: (auto-256954) Creating domain...
	I1202 13:09:32.068612   65727 main.go:141] libmachine: (auto-256954) Waiting to get IP...
	I1202 13:09:32.069419   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:32.069782   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:32.069805   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:32.069765   65749 retry.go:31] will retry after 259.767955ms: waiting for machine to come up
	I1202 13:09:32.331175   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:32.331887   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:32.331913   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:32.331823   65749 retry.go:31] will retry after 254.347577ms: waiting for machine to come up
	I1202 13:09:32.588370   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:32.588842   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:32.588867   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:32.588803   65749 retry.go:31] will retry after 419.207641ms: waiting for machine to come up
	I1202 13:09:33.009243   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:33.009705   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:33.009741   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:33.009658   65749 retry.go:31] will retry after 465.796278ms: waiting for machine to come up
	I1202 13:09:33.477281   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:33.477804   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:33.477830   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:33.477735   65749 retry.go:31] will retry after 712.100051ms: waiting for machine to come up
	I1202 13:09:34.191354   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:34.191898   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:34.191922   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:34.191861   65749 retry.go:31] will retry after 930.924837ms: waiting for machine to come up
	I1202 13:09:35.123896   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:35.124541   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:35.124568   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:35.124451   65749 retry.go:31] will retry after 750.630626ms: waiting for machine to come up
	I1202 13:09:35.877021   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:35.877518   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:35.877546   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:35.877469   65749 retry.go:31] will retry after 1.292231426s: waiting for machine to come up
	I1202 13:09:37.171971   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:37.172416   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:37.172705   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:37.172507   65749 retry.go:31] will retry after 1.349299439s: waiting for machine to come up
	I1202 13:09:38.523168   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:38.523545   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:38.523597   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:38.523527   65749 retry.go:31] will retry after 2.010782979s: waiting for machine to come up
	I1202 13:09:40.535482   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:40.536045   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:40.536073   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:40.536000   65749 retry.go:31] will retry after 2.528292339s: waiting for machine to come up
	I1202 13:09:43.066329   65727 main.go:141] libmachine: (auto-256954) DBG | domain auto-256954 has defined MAC address 52:54:00:e3:c5:a1 in network mk-auto-256954
	I1202 13:09:43.066838   65727 main.go:141] libmachine: (auto-256954) DBG | unable to find current IP address of domain auto-256954 in network mk-auto-256954
	I1202 13:09:43.066867   65727 main.go:141] libmachine: (auto-256954) DBG | I1202 13:09:43.066804   65749 retry.go:31] will retry after 3.024706697s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.477279604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144988477257592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f53eed04-b832-40aa-90a1-303d0ad7eeb9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.477804009Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2562452a-56c6-4661-b33d-92e0c3012ab7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.477867735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2562452a-56c6-4661-b33d-92e0c3012ab7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.478065668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2562452a-56c6-4661-b33d-92e0c3012ab7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.518422340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b0ebb83-f263-481a-a57c-16aa81507acd name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.518511437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b0ebb83-f263-481a-a57c-16aa81507acd name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.519930545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c11d8f28-081c-43bf-885e-501124832eec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.520343294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144988520322344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c11d8f28-081c-43bf-885e-501124832eec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.520727897Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99a6a4a5-f0df-4b82-92bc-e3df30f871d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.520846899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99a6a4a5-f0df-4b82-92bc-e3df30f871d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.521087115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99a6a4a5-f0df-4b82-92bc-e3df30f871d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.557167229Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ff6435e-43eb-4698-9e0f-93c2df8a8398 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.557248352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ff6435e-43eb-4698-9e0f-93c2df8a8398 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.558504423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2dbdcbc-4e20-4aea-bd6a-dc39c3bab0ab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.558911563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144988558884750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2dbdcbc-4e20-4aea-bd6a-dc39c3bab0ab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.559446620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a32ffef-7593-4123-ac34-7e6f1ae98923 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.559526330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a32ffef-7593-4123-ac34-7e6f1ae98923 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.559712383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a32ffef-7593-4123-ac34-7e6f1ae98923 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.594709519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3535844-b192-44fd-ba94-0baf5f252869 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.594841601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3535844-b192-44fd-ba94-0baf5f252869 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.595869486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a72cda6-2ce1-4bab-a6ff-2a8476b001ba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.596242504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144988596222474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a72cda6-2ce1-4bab-a6ff-2a8476b001ba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.596689070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00120405-c513-40d7-b5b4-62619ba71e95 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.596804495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00120405-c513-40d7-b5b4-62619ba71e95 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:48 no-preload-658679 crio[712]: time="2024-12-02 13:09:48.597217541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733143839653296037,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fe13c277f44c13f62eb843afd2db76c7c0876400f205380144f78aa60c5620c,PodSandboxId:ea70db85389dfcf194d3f477d2cc219dc2c8c1c2f156f85fb68dbd1022178a33,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733143818678816326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ec4496d6-f7d8-49db-9c91-99516b484a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35,PodSandboxId:4f7cd59c9e868cc8b35b8fcb5976711dae2117c905fdb34bd96e3d5ab08fea70,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733143816511836511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cvfc9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f88088d1-7d48-498a-8251-f3a9ff436583,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c,PodSandboxId:a84cbf3acc7fe3b3c9e4e0b4b382ebf49a5e30892668b51800fca3fc2835b902,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733143808956506822,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
736f43f-6d15-41de-856a-048887f08742,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85,PodSandboxId:e275084c32adb91a4b8be9593d71fdf31e183ea10b206f24305395b0578054e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733143808797931840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xf6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 477778b7-12f0-4055-a583-edbf84c1a6
35,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac,PodSandboxId:82bddb51e45f22fb39928422acac285ce825922d9db70813e8268bcbaee1aef7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733143804128050556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 855950d9f38a59d78035922ca1f3f8e6,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4,PodSandboxId:d33a23bb21be2848996924d4d742ce9839e14f9fb871b3e33b534af1e012cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733143804074149085,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2956692446e925286f1f6deecc6075de,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7,PodSandboxId:1e4aaaa1c5f787068a3733dc3c7bceffbaa8c4c11d449fc14a7edf58242265d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733143804055047814,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3047d2cbb0870e4faeaf39a24d235d8,},Annotations:map[string]string{io.kubernetes.container.hash: c692
7529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14,PodSandboxId:420a4aaa23c692127f204cb4a4ac8cab87b7a1bb252e0266b3e06e055eab2183,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733143804047118684,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-658679,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590f19d283bc4650c93f732fced32457,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00120405-c513-40d7-b5b4-62619ba71e95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b120e768c4ec7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   a84cbf3acc7fe       storage-provisioner
	7fe13c277f44c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   ea70db85389df       busybox
	7db2e67ce7bdd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   4f7cd59c9e868       coredns-7c65d6cfc9-cvfc9
	ff4595631eef7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   a84cbf3acc7fe       storage-provisioner
	15d09a46ff041       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      19 minutes ago      Running             kube-proxy                1                   e275084c32adb       kube-proxy-2xf6j
	460259371c977       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   82bddb51e45f2       etcd-no-preload-658679
	0c490584031d2       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      19 minutes ago      Running             kube-scheduler            1                   d33a23bb21be2       kube-scheduler-no-preload-658679
	d8d62b779a876       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      19 minutes ago      Running             kube-apiserver            1                   1e4aaaa1c5f78       kube-apiserver-no-preload-658679
	316b371ddf0b0       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      19 minutes ago      Running             kube-controller-manager   1                   420a4aaa23c69       kube-controller-manager-no-preload-658679
	
	
	==> coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44557 - 28529 "HINFO IN 3661014269720602643.8108251855968392496. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008212656s
	
	
	==> describe nodes <==
	Name:               no-preload-658679
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-658679
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=no-preload-658679
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T12_40_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 12:40:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-658679
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 13:09:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 13:05:57 +0000   Mon, 02 Dec 2024 12:40:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 13:05:57 +0000   Mon, 02 Dec 2024 12:40:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 13:05:57 +0000   Mon, 02 Dec 2024 12:40:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 13:05:57 +0000   Mon, 02 Dec 2024 12:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.205
	  Hostname:    no-preload-658679
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2135076092b403ab0b57f9cee8abe8c
	  System UUID:                b2135076-092b-403a-b0b5-7f9cee8abe8c
	  Boot ID:                    059e703a-4f31-4023-a8da-070b32d9c155
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-cvfc9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-658679                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-658679             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-658679    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-2xf6j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-658679             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-sn7tq              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-658679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-658679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-658679 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-658679 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-658679 event: Registered Node no-preload-658679 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-658679 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-658679 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-658679 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-658679 event: Registered Node no-preload-658679 in Controller
	
	
	==> dmesg <==
	[Dec 2 12:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060264] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047306] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.213302] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.819622] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.653615] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.564792] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.057247] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052124] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.168963] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.142963] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.275481] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[Dec 2 12:50] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.062218] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.749457] systemd-fstab-generator[1436]: Ignoring "noauto" option for root device
	[  +4.736730] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.366144] systemd-fstab-generator[2084]: Ignoring "noauto" option for root device
	[  +3.217855] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.223422] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] <==
	{"level":"info","ts":"2024-12-02T12:50:04.673554Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2835eac8f11eb509","local-member-id":"51448055b6368d24","added-peer-id":"51448055b6368d24","added-peer-peer-urls":["https://192.168.61.205:2380"]}
	{"level":"info","ts":"2024-12-02T12:50:04.674057Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2835eac8f11eb509","local-member-id":"51448055b6368d24","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:50:04.674209Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T12:50:06.485363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-02T12:50:06.485432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-02T12:50:06.485455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 received MsgPreVoteResp from 51448055b6368d24 at term 2"}
	{"level":"info","ts":"2024-12-02T12:50:06.485471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became candidate at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.485477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 received MsgVoteResp from 51448055b6368d24 at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.485490Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"51448055b6368d24 became leader at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.485498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 51448055b6368d24 elected leader 51448055b6368d24 at term 3"}
	{"level":"info","ts":"2024-12-02T12:50:06.497974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:50:06.498257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T12:50:06.497988Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"51448055b6368d24","local-member-attributes":"{Name:no-preload-658679 ClientURLs:[https://192.168.61.205:2379]}","request-path":"/0/members/51448055b6368d24/attributes","cluster-id":"2835eac8f11eb509","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T12:50:06.498644Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T12:50:06.498700Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-02T12:50:06.499188Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:50:06.499436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T12:50:06.500004Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.205:2379"}
	{"level":"info","ts":"2024-12-02T12:50:06.500622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T13:00:06.526581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":819}
	{"level":"info","ts":"2024-12-02T13:00:06.536146Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":819,"took":"8.756355ms","hash":3605462655,"current-db-size-bytes":2715648,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2715648,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-02T13:00:06.536231Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3605462655,"revision":819,"compact-revision":-1}
	{"level":"info","ts":"2024-12-02T13:05:06.534090Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1062}
	{"level":"info","ts":"2024-12-02T13:05:06.540357Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1062,"took":"5.839247ms","hash":390161843,"current-db-size-bytes":2715648,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-02T13:05:06.540424Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":390161843,"revision":1062,"compact-revision":819}
	
	
	==> kernel <==
	 13:09:48 up 20 min,  0 users,  load average: 0.05, 0.10, 0.09
	Linux no-preload-658679 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 13:05:08.825920       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:05:08.826017       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 13:05:08.827032       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:05:08.827051       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:06:08.827684       1 handler_proxy.go:99] no RequestInfo found in the context
	W1202 13:06:08.827918       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:06:08.828097       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1202 13:06:08.828027       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 13:06:08.830137       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:06:08.830178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:08:08.830866       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:08:08.831190       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 13:08:08.831251       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:08:08.831281       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1202 13:08:08.832444       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:08:08.832507       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] <==
	E1202 13:04:41.535536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:04:42.023029       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:05:11.542885       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:05:12.032650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:05:41.549321       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:05:42.042594       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:05:57.546105       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-658679"
	E1202 13:06:11.555573       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:06:12.050071       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:06:29.449268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="278.809µs"
	I1202 13:06:41.447695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="159.459µs"
	E1202 13:06:41.562338       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:06:42.057628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:07:11.568807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:07:12.065124       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:07:41.575375       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:07:42.075651       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:08:11.583618       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:08:12.082925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:08:41.590130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:08:42.091123       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:09:11.597368       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:12.098274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:09:41.604301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:42.107733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 12:50:09.088670       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 12:50:09.097673       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.205"]
	E1202 12:50:09.097840       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 12:50:09.138251       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 12:50:09.138384       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 12:50:09.138438       1 server_linux.go:169] "Using iptables Proxier"
	I1202 12:50:09.142258       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 12:50:09.142561       1 server.go:483] "Version info" version="v1.31.2"
	I1202 12:50:09.142711       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:50:09.144695       1 config.go:199] "Starting service config controller"
	I1202 12:50:09.144740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 12:50:09.144843       1 config.go:105] "Starting endpoint slice config controller"
	I1202 12:50:09.144861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 12:50:09.145310       1 config.go:328] "Starting node config controller"
	I1202 12:50:09.146502       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 12:50:09.245215       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 12:50:09.245241       1 shared_informer.go:320] Caches are synced for service config
	I1202 12:50:09.246735       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] <==
	I1202 12:50:05.351873       1 serving.go:386] Generated self-signed cert in-memory
	W1202 12:50:07.733726       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1202 12:50:07.733822       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1202 12:50:07.733832       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1202 12:50:07.733843       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1202 12:50:07.824166       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1202 12:50:07.826839       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 12:50:07.833097       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1202 12:50:07.834877       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 12:50:07.834926       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1202 12:50:07.835042       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1202 12:50:07.936050       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 13:08:42 no-preload-658679 kubelet[1443]: E1202 13:08:42.432284    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:08:43 no-preload-658679 kubelet[1443]: E1202 13:08:43.698342    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144923698040690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:08:43 no-preload-658679 kubelet[1443]: E1202 13:08:43.698386    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144923698040690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:08:53 no-preload-658679 kubelet[1443]: E1202 13:08:53.699903    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144933699233077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:08:53 no-preload-658679 kubelet[1443]: E1202 13:08:53.699946    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144933699233077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:08:56 no-preload-658679 kubelet[1443]: E1202 13:08:56.431910    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]: E1202 13:09:03.450502    1443 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]: E1202 13:09:03.701229    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144943700841259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:03 no-preload-658679 kubelet[1443]: E1202 13:09:03.701255    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144943700841259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:07 no-preload-658679 kubelet[1443]: E1202 13:09:07.432131    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:09:13 no-preload-658679 kubelet[1443]: E1202 13:09:13.704101    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144953703602703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:13 no-preload-658679 kubelet[1443]: E1202 13:09:13.704638    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144953703602703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:22 no-preload-658679 kubelet[1443]: E1202 13:09:22.432213    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:09:23 no-preload-658679 kubelet[1443]: E1202 13:09:23.707359    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144963707151395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:23 no-preload-658679 kubelet[1443]: E1202 13:09:23.707380    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144963707151395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:33 no-preload-658679 kubelet[1443]: E1202 13:09:33.709549    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144973709128377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:33 no-preload-658679 kubelet[1443]: E1202 13:09:33.710184    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144973709128377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:37 no-preload-658679 kubelet[1443]: E1202 13:09:37.433227    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	Dec 02 13:09:43 no-preload-658679 kubelet[1443]: E1202 13:09:43.712559    1443 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144983711876569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:43 no-preload-658679 kubelet[1443]: E1202 13:09:43.713041    1443 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144983711876569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:09:48 no-preload-658679 kubelet[1443]: E1202 13:09:48.432169    1443 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-sn7tq" podUID="8171d626-7036-4585-a967-8ff54f00cfc8"
	
	
	==> storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] <==
	I1202 12:50:39.733363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 12:50:39.742561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 12:50:39.742657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 12:50:39.749385       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 12:50:39.749540       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-658679_aee4fca7-ecbd-474c-9f02-2e66a09e3bcf!
	I1202 12:50:39.750352       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a29abcb-c5c7-4502-917f-abd7d8e4a569", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-658679_aee4fca7-ecbd-474c-9f02-2e66a09e3bcf became leader
	I1202 12:50:39.850277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-658679_aee4fca7-ecbd-474c-9f02-2e66a09e3bcf!
	
	
	==> storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] <==
	I1202 12:50:09.073018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 12:50:39.076094       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
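The kube-apiserver and kubelet logs above show why this addon check cannot pass: the metrics-server pod is stuck in ImagePullBackOff on the unresolvable fake.domain/registry.k8s.io/echoserver:1.4 image, so the aggregated v1beta1.metrics.k8s.io API keeps answering 503 to the apiserver. A minimal diagnostic sketch (hypothetical, not part of the test suite) that confirms both symptoms by shelling out to kubectl; it assumes kubectl is on PATH, the no-preload-658679 context still exists, and the addon pod carries the usual k8s-app=metrics-server label (an assumption, not taken from the log above):

// diag.go: hypothetical sketch, not the real test helpers.
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and prints the command plus its combined output.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	// APIService backed by metrics-server; expected to report Available=False
	// while the backing pod cannot start.
	run("--context", "no-preload-658679", "get", "apiservices", "v1beta1.metrics.k8s.io")

	// Pod status; the label selector is an assumption based on the default
	// metrics-server addon manifests, not something shown in the log above.
	run("--context", "no-preload-658679", "-n", "kube-system",
		"get", "pods", "-l", "k8s-app=metrics-server", "-o", "wide")
}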
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-658679 -n no-preload-658679
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-658679 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-sn7tq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-658679 describe pod metrics-server-6867b74b74-sn7tq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-658679 describe pod metrics-server-6867b74b74-sn7tq: exit status 1 (61.446619ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-sn7tq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-658679 describe pod metrics-server-6867b74b74-sn7tq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (374.00s)
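The post-mortem above lists non-Running pods with a field selector and then describes the one it finds, and the describe comes back NotFound, most likely because it ran without -n against the context's default namespace while metrics-server-6867b74b74-sn7tq lives in kube-system (the pod may also simply have disappeared between the two calls). Below is a minimal sketch of that same sequence (hypothetical, not the real helpers_test.go code) that carries the namespace along and describes each pod where it actually lives; it assumes kubectl is on PATH and the test context still exists:

// postmortem.go: hypothetical sketch mirroring the helper sequence above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "no-preload-658679" // kubeconfig context from the failing test

	// One "namespace name" pair per line for every pod whose phase is not Running.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}`).Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue // empty output or malformed line
		}
		// Describe the pod in its own namespace; tolerate NotFound for pods
		// deleted since the list call.
		desc, err := exec.Command("kubectl", "--context", ctx,
			"-n", fields[0], "describe", "pod", fields[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s/%s failed: %v\n", fields[0], fields[1], err)
		}
		fmt.Printf("%s", desc)
	}
}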

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (174.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.171:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.171:8443: connect: connection refused
[the identical warning above repeats 74 more times: dial tcp 192.168.50.171:8443: connect: connection refused]
E1202 13:07:49.237877   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
[the same warning repeats 98 more times until the 9m0s wait expires]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (233.403509ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-666766" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-666766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-666766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.35µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-666766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
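A minimal manual equivalent of the image check that fails here, assuming the old-k8s-version-666766 apiserver becomes reachable again (it never did during this run); the context, namespace, and deployment names are taken from the commands above, and the jsonpath simply prints the container images configured on the dashboard-metrics-scraper Deployment:

	# print the scraper Deployment's container image(s)
	kubectl --context old-k8s-version-666766 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects this output to contain registry.k8s.io/echoserver:1.4, the image substituted via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 when the dashboard addon was enabled (see the Audit table below).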
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (220.065466ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-666766 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-953044            | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-983490             | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:42 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-983490                  | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-983490 --memory=2200 --alsologtostderr   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-658679                  | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-658679                                   | no-preload-658679            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-983490 image list                           | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| delete  | -p newest-cni-983490                                   | newest-cni-983490            | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC | 02 Dec 24 12:49 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-666766        | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-953044                 | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-953044                                  | embed-certs-953044           | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666766             | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC | 02 Dec 24 12:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666766                              | old-k8s-version-666766       | jenkins | v1.34.0 | 02 Dec 24 12:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653783  | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC | 02 Dec 24 12:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:49 UTC |                     |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653783       | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653783 | jenkins | v1.34.0 | 02 Dec 24 12:51 UTC | 02 Dec 24 13:02 UTC |
	|         | default-k8s-diff-port-653783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 12:51:53
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 12:51:53.986642   61173 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:51:53.986878   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.986887   61173 out.go:358] Setting ErrFile to fd 2...
	I1202 12:51:53.986891   61173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:51:53.987040   61173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:51:53.987531   61173 out.go:352] Setting JSON to false
	I1202 12:51:53.988496   61173 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5666,"bootTime":1733138248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:51:53.988587   61173 start.go:139] virtualization: kvm guest
	I1202 12:51:53.990552   61173 out.go:177] * [default-k8s-diff-port-653783] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:51:53.991681   61173 notify.go:220] Checking for updates...
	I1202 12:51:53.991692   61173 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:51:53.992827   61173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:51:53.993900   61173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:51:53.995110   61173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:51:53.996273   61173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:51:53.997326   61173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:51:53.998910   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:51:53.999556   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:53.999630   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.014837   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I1202 12:51:54.015203   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.015691   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.015717   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.016024   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.016213   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.016420   61173 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:51:54.016702   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.016740   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.031103   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43443
	I1202 12:51:54.031480   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.031846   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.031862   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.032152   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.032313   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.066052   61173 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 12:51:54.067269   61173 start.go:297] selected driver: kvm2
	I1202 12:51:54.067282   61173 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.067398   61173 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:51:54.068083   61173 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.068159   61173 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 12:51:54.082839   61173 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 12:51:54.083361   61173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:51:54.083405   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:51:54.083450   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:51:54.083491   61173 start.go:340] cluster config:
	{Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 12:51:54.083581   61173 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 12:51:54.085236   61173 out.go:177] * Starting "default-k8s-diff-port-653783" primary control-plane node in "default-k8s-diff-port-653783" cluster
	I1202 12:51:54.086247   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:51:54.086275   61173 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 12:51:54.086281   61173 cache.go:56] Caching tarball of preloaded images
	I1202 12:51:54.086363   61173 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 12:51:54.086377   61173 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 12:51:54.086471   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:51:54.086683   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:51:54.086721   61173 start.go:364] duration metric: took 21.68µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:51:54.086742   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:51:54.086750   61173 fix.go:54] fixHost starting: 
	I1202 12:51:54.087016   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:51:54.087049   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:51:54.100439   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1202 12:51:54.100860   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:51:54.101284   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:51:54.101305   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:51:54.101699   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:51:54.101899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.102027   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:51:54.103398   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Running err=<nil>
	W1202 12:51:54.103428   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:51:54.104862   61173 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-653783" VM ...
	I1202 12:51:51.250214   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:53.251543   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:55.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.384562   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:54.397979   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:54.398032   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:54.431942   59162 cri.go:89] found id: ""
	I1202 12:51:54.431965   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.431973   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:54.431979   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:54.432024   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:54.466033   59162 cri.go:89] found id: ""
	I1202 12:51:54.466054   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.466062   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:54.466067   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:54.466116   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:54.506462   59162 cri.go:89] found id: ""
	I1202 12:51:54.506486   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.506493   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:54.506499   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:54.506545   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:54.539966   59162 cri.go:89] found id: ""
	I1202 12:51:54.539996   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.540006   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:54.540013   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:54.540068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:54.572987   59162 cri.go:89] found id: ""
	I1202 12:51:54.573027   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.573038   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:54.573046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:54.573107   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:54.609495   59162 cri.go:89] found id: ""
	I1202 12:51:54.609528   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.609539   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:54.609547   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:54.609593   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:54.643109   59162 cri.go:89] found id: ""
	I1202 12:51:54.643136   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.643148   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:54.643205   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:54.643279   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:54.681113   59162 cri.go:89] found id: ""
	I1202 12:51:54.681151   59162 logs.go:282] 0 containers: []
	W1202 12:51:54.681160   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:54.681168   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:54.681180   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:54.734777   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:54.734806   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:54.748171   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:54.748196   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:54.821609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.821628   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:54.821642   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:54.900306   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:54.900339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.438971   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:51:57.454128   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:51:57.454187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:51:57.489852   59162 cri.go:89] found id: ""
	I1202 12:51:57.489877   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.489885   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:51:57.489890   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:51:57.489938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:51:57.523496   59162 cri.go:89] found id: ""
	I1202 12:51:57.523515   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.523522   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:51:57.523528   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:51:57.523576   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:51:57.554394   59162 cri.go:89] found id: ""
	I1202 12:51:57.554417   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.554429   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:51:57.554436   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:51:57.554497   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:51:57.586259   59162 cri.go:89] found id: ""
	I1202 12:51:57.586281   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.586291   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:51:57.586298   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:51:57.586353   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:51:57.618406   59162 cri.go:89] found id: ""
	I1202 12:51:57.618427   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.618435   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:51:57.618440   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:51:57.618482   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:51:57.649491   59162 cri.go:89] found id: ""
	I1202 12:51:57.649517   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.649527   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:51:57.649532   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:51:57.649575   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:51:57.682286   59162 cri.go:89] found id: ""
	I1202 12:51:57.682306   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.682313   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:51:57.682319   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:51:57.682364   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:51:57.720929   59162 cri.go:89] found id: ""
	I1202 12:51:57.720956   59162 logs.go:282] 0 containers: []
	W1202 12:51:57.720967   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:51:57.720977   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:51:57.720987   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:51:57.802270   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:51:57.802302   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:57.841214   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:51:57.841246   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:51:57.893691   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:51:57.893724   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:51:57.906616   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:51:57.906640   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:51:57.973328   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:51:54.153852   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:56.653113   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:51:54.105934   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:51:54.105950   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:51:54.106120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:51:54.108454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.108866   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:48:33 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:51:54.108899   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:51:54.109032   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:51:54.109170   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109328   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:51:54.109487   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:51:54.109662   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:51:54.109863   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:51:54.109875   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:51:57.012461   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:51:57.751276   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.250936   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.473500   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:00.487912   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:00.487973   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:00.526513   59162 cri.go:89] found id: ""
	I1202 12:52:00.526539   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.526548   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:00.526557   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:00.526620   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:00.561483   59162 cri.go:89] found id: ""
	I1202 12:52:00.561511   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.561519   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:00.561526   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:00.561583   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:00.592435   59162 cri.go:89] found id: ""
	I1202 12:52:00.592473   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.592484   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:00.592491   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:00.592551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:00.624686   59162 cri.go:89] found id: ""
	I1202 12:52:00.624710   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.624722   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:00.624727   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:00.624771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:00.662610   59162 cri.go:89] found id: ""
	I1202 12:52:00.662639   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.662650   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:00.662657   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:00.662721   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:00.695972   59162 cri.go:89] found id: ""
	I1202 12:52:00.695993   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.696000   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:00.696006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:00.696048   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:00.727200   59162 cri.go:89] found id: ""
	I1202 12:52:00.727230   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.727253   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:00.727261   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:00.727316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:00.761510   59162 cri.go:89] found id: ""
	I1202 12:52:00.761536   59162 logs.go:282] 0 containers: []
	W1202 12:52:00.761545   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:00.761556   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:00.761568   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:00.812287   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:00.812318   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:00.825282   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:00.825309   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:00.894016   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:00.894042   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:00.894065   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:00.972001   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:00.972034   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:51:59.152373   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:01.153532   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.653266   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:00.084529   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:02.751465   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:04.752349   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:03.512982   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:03.528814   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:03.528884   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:03.564137   59162 cri.go:89] found id: ""
	I1202 12:52:03.564159   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.564166   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:03.564173   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:03.564223   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:03.608780   59162 cri.go:89] found id: ""
	I1202 12:52:03.608811   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.608822   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:03.608829   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:03.608891   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:03.644906   59162 cri.go:89] found id: ""
	I1202 12:52:03.644943   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.644954   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:03.644978   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:03.645052   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:03.676732   59162 cri.go:89] found id: ""
	I1202 12:52:03.676754   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.676761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:03.676767   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:03.676809   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:03.711338   59162 cri.go:89] found id: ""
	I1202 12:52:03.711362   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.711369   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:03.711375   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:03.711424   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:03.743657   59162 cri.go:89] found id: ""
	I1202 12:52:03.743682   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.743689   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:03.743694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:03.743737   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:03.777740   59162 cri.go:89] found id: ""
	I1202 12:52:03.777759   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.777766   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:03.777772   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:03.777818   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:03.811145   59162 cri.go:89] found id: ""
	I1202 12:52:03.811169   59162 logs.go:282] 0 containers: []
	W1202 12:52:03.811179   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:03.811190   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:03.811204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:03.862069   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:03.862093   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:03.875133   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:03.875164   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:03.947077   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:03.947102   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:03.947114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:04.023458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:04.023487   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
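The cycle above probes each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and records found id: "" when nothing comes back. A minimal stand-alone sketch of that probe (not minikube's own cri.go; it assumes sudo and crictl are available on the host being checked):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names the log loops over.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent to the ssh_runner command in the log, run locally:
		//   sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the log's: No container was found matching "<name>"
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

Running this on the affected node would reproduce the empty results seen above: every probe returns no container IDs because nothing for the control plane is running under CRI-O.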
	I1202 12:52:06.562323   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:06.577498   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:06.577556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:06.613937   59162 cri.go:89] found id: ""
	I1202 12:52:06.613962   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.613970   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:06.613976   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:06.614023   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:06.647630   59162 cri.go:89] found id: ""
	I1202 12:52:06.647655   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.647662   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:06.647667   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:06.647711   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:06.683758   59162 cri.go:89] found id: ""
	I1202 12:52:06.683783   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.683793   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:06.683800   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:06.683861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:06.722664   59162 cri.go:89] found id: ""
	I1202 12:52:06.722686   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.722694   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:06.722699   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:06.722747   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:06.756255   59162 cri.go:89] found id: ""
	I1202 12:52:06.756280   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.756290   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:06.756296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:06.756340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:06.792350   59162 cri.go:89] found id: ""
	I1202 12:52:06.792376   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.792387   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:06.792394   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:06.792450   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:06.827259   59162 cri.go:89] found id: ""
	I1202 12:52:06.827289   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.827301   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:06.827308   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:06.827367   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:06.858775   59162 cri.go:89] found id: ""
	I1202 12:52:06.858795   59162 logs.go:282] 0 containers: []
	W1202 12:52:06.858802   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:06.858811   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:06.858821   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:06.911764   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:06.911795   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:06.925297   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:06.925326   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:06.993703   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:06.993730   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:06.993744   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:07.073657   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:07.073685   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
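Each cycle also shells out for the kubelet, CRI-O, and dmesg logs. A small stand-alone sketch of that gathering step, assuming a systemd host with journalctl, bash, and passwordless sudo (it mirrors the shell pipelines shown in the log, not minikube's logs.go itself):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact shell pipelines the log shows being run over SSH.
	gathers := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, g := range gathers {
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("=== %s: gather failed: %v ===\n", g.name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", g.name, out)
	}
}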
	I1202 12:52:05.653526   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:08.152177   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:06.164438   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:07.251496   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.752479   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
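The interleaved pod_ready lines come from other clusters in the same run polling a metrics-server pod's Ready condition, which stays "False" throughout. A rough reproduction of that check with plain kubectl (the pod name is taken from the log, the retry policy is illustrative, and this is not minikube's pod_ready.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Pod name and namespace as they appear in the log lines above.
	pod, ns := "metrics-server-6867b74b74-fjspk", "kube-system"
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, "-o", jsonpath).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			fmt.Println("pod is Ready")
			return
		}
		// Matches the log's phrasing: pod ... has status "Ready":"False"
		fmt.Printf("pod %q has status \"Ready\":%q; retrying\n", pod, status)
		time.Sleep(2 * time.Second)
	}
}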
	I1202 12:52:09.611640   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:09.626141   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:09.626199   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:09.661406   59162 cri.go:89] found id: ""
	I1202 12:52:09.661425   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.661432   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:09.661439   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:09.661498   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:09.698145   59162 cri.go:89] found id: ""
	I1202 12:52:09.698173   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.698184   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:09.698191   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:09.698252   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:09.732150   59162 cri.go:89] found id: ""
	I1202 12:52:09.732178   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.732189   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:09.732197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:09.732261   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:09.768040   59162 cri.go:89] found id: ""
	I1202 12:52:09.768063   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.768070   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:09.768076   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:09.768130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:09.801038   59162 cri.go:89] found id: ""
	I1202 12:52:09.801064   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.801075   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:09.801082   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:09.801130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:09.841058   59162 cri.go:89] found id: ""
	I1202 12:52:09.841082   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.841089   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:09.841095   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:09.841137   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:09.885521   59162 cri.go:89] found id: ""
	I1202 12:52:09.885541   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.885548   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:09.885554   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:09.885602   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:09.924759   59162 cri.go:89] found id: ""
	I1202 12:52:09.924779   59162 logs.go:282] 0 containers: []
	W1202 12:52:09.924786   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:09.924793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:09.924804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:09.968241   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:09.968273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:10.020282   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:10.020315   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:10.036491   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:10.036519   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:10.113297   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:10.113324   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:10.113339   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:12.688410   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:12.705296   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:12.705356   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:12.743097   59162 cri.go:89] found id: ""
	I1202 12:52:12.743119   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.743127   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:12.743133   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:12.743187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:12.778272   59162 cri.go:89] found id: ""
	I1202 12:52:12.778292   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.778299   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:12.778304   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:12.778365   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:12.816087   59162 cri.go:89] found id: ""
	I1202 12:52:12.816116   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.816127   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:12.816134   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:12.816187   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:12.850192   59162 cri.go:89] found id: ""
	I1202 12:52:12.850214   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.850221   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:12.850227   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:12.850282   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:12.883325   59162 cri.go:89] found id: ""
	I1202 12:52:12.883351   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.883360   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:12.883367   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:12.883427   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:12.916121   59162 cri.go:89] found id: ""
	I1202 12:52:12.916157   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.916169   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:12.916176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:12.916251   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:12.946704   59162 cri.go:89] found id: ""
	I1202 12:52:12.946733   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.946746   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:12.946753   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:12.946802   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:12.979010   59162 cri.go:89] found id: ""
	I1202 12:52:12.979041   59162 logs.go:282] 0 containers: []
	W1202 12:52:12.979050   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:12.979062   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:12.979075   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:13.062141   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:13.062171   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:13.111866   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:13.111900   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:13.162470   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:13.162498   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:13.178497   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:13.178525   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:13.245199   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
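Every "describe nodes" attempt above fails with a refused connection to localhost:8443, which is consistent with the earlier probes finding no kube-apiserver container at all. A quick one-shot check that the API endpoint is down (a diagnostic sketch run on the node, not part of the test code; the address comes from the error text above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "The connection to the server localhost:8443 was refused" => nothing is listening here.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver endpoint accepted a TCP connection")
}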
	I1202 12:52:10.152556   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:12.153087   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:09.236522   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:12.249938   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:14.750814   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.746327   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:15.760092   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:15.760160   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:15.797460   59162 cri.go:89] found id: ""
	I1202 12:52:15.797484   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.797495   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:15.797503   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:15.797563   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:15.829969   59162 cri.go:89] found id: ""
	I1202 12:52:15.829998   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.830009   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:15.830017   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:15.830072   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:15.862390   59162 cri.go:89] found id: ""
	I1202 12:52:15.862418   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.862428   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:15.862435   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:15.862484   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:15.895223   59162 cri.go:89] found id: ""
	I1202 12:52:15.895244   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.895251   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:15.895257   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:15.895311   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:15.933157   59162 cri.go:89] found id: ""
	I1202 12:52:15.933184   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.933192   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:15.933197   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:15.933245   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:15.964387   59162 cri.go:89] found id: ""
	I1202 12:52:15.964414   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.964425   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:15.964433   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:15.964487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:15.996803   59162 cri.go:89] found id: ""
	I1202 12:52:15.996825   59162 logs.go:282] 0 containers: []
	W1202 12:52:15.996832   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:15.996837   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:15.996881   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:16.029364   59162 cri.go:89] found id: ""
	I1202 12:52:16.029394   59162 logs.go:282] 0 containers: []
	W1202 12:52:16.029402   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:16.029411   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:16.029422   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:16.098237   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:16.098264   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:16.098278   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:16.172386   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:16.172414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:16.216899   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:16.216923   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:16.281565   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:16.281591   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:14.154258   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:16.652807   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:15.316450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:18.388460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
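The libmachine lines from PID 61173 show repeated "no route to host" errors dialing 192.168.39.154:22, i.e. that machine's SSH port never becomes reachable. A hedged sketch of a dial-with-retry loop of that kind (the address is copied from the log; the timeout and retry counts here are illustrative, not libmachine's actual policy):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.154:22" // from the log lines above
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("SSH port reachable after %d attempt(s)\n", attempt)
			return
		}
		// "no route to host" surfaces here as a *net.OpError.
		fmt.Printf("attempt %d: error dialing TCP: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("giving up: host unreachable")
}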
	I1202 12:52:16.751794   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:19.250295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:18.796337   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:18.809573   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:18.809637   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:18.847965   59162 cri.go:89] found id: ""
	I1202 12:52:18.847991   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.847999   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:18.848004   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:18.848053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:18.883714   59162 cri.go:89] found id: ""
	I1202 12:52:18.883741   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.883751   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:18.883758   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:18.883817   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:18.918581   59162 cri.go:89] found id: ""
	I1202 12:52:18.918605   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.918612   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:18.918617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:18.918672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:18.954394   59162 cri.go:89] found id: ""
	I1202 12:52:18.954426   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.954437   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:18.954443   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:18.954502   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:18.995321   59162 cri.go:89] found id: ""
	I1202 12:52:18.995347   59162 logs.go:282] 0 containers: []
	W1202 12:52:18.995355   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:18.995361   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:18.995423   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:19.034030   59162 cri.go:89] found id: ""
	I1202 12:52:19.034055   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.034066   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:19.034073   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:19.034130   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:19.073569   59162 cri.go:89] found id: ""
	I1202 12:52:19.073597   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.073609   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:19.073615   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:19.073662   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:19.112049   59162 cri.go:89] found id: ""
	I1202 12:52:19.112078   59162 logs.go:282] 0 containers: []
	W1202 12:52:19.112090   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:19.112100   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:19.112113   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:19.180480   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.180502   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:19.180516   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:19.258236   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:19.258264   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:19.299035   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:19.299053   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:19.352572   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:19.352602   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
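Each cycle is gated on "sudo pgrep -xnf kube-apiserver.*minikube.*", which keeps exiting non-zero because no apiserver process ever appears. A minimal version of that gate (assuming pgrep and sudo on the host; the pattern is copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep flags as used in the log: -x exact match, -n newest process, -f match the full command line.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// pgrep exits with status 1 when nothing matches, which lands here.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
}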
	I1202 12:52:21.866524   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:21.879286   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:21.879340   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:21.910463   59162 cri.go:89] found id: ""
	I1202 12:52:21.910489   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.910498   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:21.910504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:21.910551   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:21.943130   59162 cri.go:89] found id: ""
	I1202 12:52:21.943157   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.943165   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:21.943171   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:21.943216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:21.976969   59162 cri.go:89] found id: ""
	I1202 12:52:21.976990   59162 logs.go:282] 0 containers: []
	W1202 12:52:21.976997   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:21.977002   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:21.977055   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:22.022113   59162 cri.go:89] found id: ""
	I1202 12:52:22.022144   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.022153   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:22.022159   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:22.022218   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:22.057387   59162 cri.go:89] found id: ""
	I1202 12:52:22.057406   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.057413   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:22.057418   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:22.057459   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:22.089832   59162 cri.go:89] found id: ""
	I1202 12:52:22.089866   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.089892   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:22.089900   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:22.089960   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:22.121703   59162 cri.go:89] found id: ""
	I1202 12:52:22.121727   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.121735   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:22.121740   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:22.121789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:22.155076   59162 cri.go:89] found id: ""
	I1202 12:52:22.155098   59162 logs.go:282] 0 containers: []
	W1202 12:52:22.155108   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:22.155117   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:22.155137   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:22.234831   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:22.234862   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:22.273912   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:22.273945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:22.327932   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:22.327966   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:22.340890   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:22.340913   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:22.419371   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:19.153845   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.652993   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:23.653111   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:21.750980   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.250791   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:24.919868   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:24.935004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:24.935068   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:24.972438   59162 cri.go:89] found id: ""
	I1202 12:52:24.972466   59162 logs.go:282] 0 containers: []
	W1202 12:52:24.972474   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:24.972480   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:24.972525   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:25.009282   59162 cri.go:89] found id: ""
	I1202 12:52:25.009310   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.009320   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:25.009329   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:25.009391   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:25.043227   59162 cri.go:89] found id: ""
	I1202 12:52:25.043254   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.043262   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:25.043267   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:25.043318   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:25.079167   59162 cri.go:89] found id: ""
	I1202 12:52:25.079191   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.079198   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:25.079204   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:25.079263   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:25.110308   59162 cri.go:89] found id: ""
	I1202 12:52:25.110332   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.110340   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:25.110346   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:25.110388   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:25.143804   59162 cri.go:89] found id: ""
	I1202 12:52:25.143830   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.143840   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:25.143846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:25.143903   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:25.178114   59162 cri.go:89] found id: ""
	I1202 12:52:25.178140   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.178147   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:25.178155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:25.178204   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:25.212632   59162 cri.go:89] found id: ""
	I1202 12:52:25.212665   59162 logs.go:282] 0 containers: []
	W1202 12:52:25.212675   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:25.212684   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:25.212696   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:25.267733   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:25.267761   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:25.281025   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:25.281048   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:25.346497   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:25.346520   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:25.346531   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:25.437435   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:25.437469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
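The "container status" step uses a fallback pipeline: prefer crictl when it is on the PATH, otherwise fall back to docker ps. A small sketch of that fallback, reusing the exact shell expression from the log (requires bash and sudo; purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same fallback the log shows: resolve crictl if present, otherwise try docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status gather failed:", err)
	}
	fmt.Printf("%s\n", out)
}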
	I1202 12:52:27.979493   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:27.993542   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:27.993615   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:28.030681   59162 cri.go:89] found id: ""
	I1202 12:52:28.030705   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.030712   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:28.030718   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:28.030771   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:28.063991   59162 cri.go:89] found id: ""
	I1202 12:52:28.064019   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.064027   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:28.064032   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:28.064080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:28.097983   59162 cri.go:89] found id: ""
	I1202 12:52:28.098018   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.098029   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:28.098038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:28.098098   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:28.131956   59162 cri.go:89] found id: ""
	I1202 12:52:28.131977   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.131987   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:28.131995   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:28.132071   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:28.170124   59162 cri.go:89] found id: ""
	I1202 12:52:28.170160   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.170171   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:28.170177   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:28.170238   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:28.203127   59162 cri.go:89] found id: ""
	I1202 12:52:28.203149   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.203157   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:28.203163   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:28.203216   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:28.240056   59162 cri.go:89] found id: ""
	I1202 12:52:28.240081   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.240088   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:28.240094   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:28.240142   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:28.276673   59162 cri.go:89] found id: ""
	I1202 12:52:28.276699   59162 logs.go:282] 0 containers: []
	W1202 12:52:28.276710   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:28.276720   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:28.276733   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:28.333435   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:28.333470   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:28.347465   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:28.347491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:52:26.153244   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.153689   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:27.508437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:26.250897   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:28.250951   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.252183   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:52:28.432745   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:28.432777   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:28.432792   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:28.515984   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:28.516017   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.057069   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:31.070021   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:31.070084   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:31.106501   59162 cri.go:89] found id: ""
	I1202 12:52:31.106530   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.106540   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:31.106547   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:31.106606   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:31.141190   59162 cri.go:89] found id: ""
	I1202 12:52:31.141219   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.141230   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:31.141238   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:31.141298   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:31.176050   59162 cri.go:89] found id: ""
	I1202 12:52:31.176077   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.176087   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:31.176099   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:31.176169   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:31.211740   59162 cri.go:89] found id: ""
	I1202 12:52:31.211769   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.211780   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:31.211786   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:31.211831   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:31.248949   59162 cri.go:89] found id: ""
	I1202 12:52:31.248974   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.248983   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:31.248990   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:31.249044   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:31.284687   59162 cri.go:89] found id: ""
	I1202 12:52:31.284709   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.284717   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:31.284723   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:31.284765   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:31.317972   59162 cri.go:89] found id: ""
	I1202 12:52:31.317997   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.318004   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:31.318010   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:31.318065   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:31.354866   59162 cri.go:89] found id: ""
	I1202 12:52:31.354893   59162 logs.go:282] 0 containers: []
	W1202 12:52:31.354904   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:31.354914   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:31.354927   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:31.425168   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:31.425191   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:31.425202   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:31.508169   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:31.508204   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:31.547193   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:31.547220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:31.601864   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:31.601892   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:30.653415   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:33.153132   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:30.580471   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:32.752026   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:35.251960   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:34.115652   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:34.131644   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:34.131695   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:34.174473   59162 cri.go:89] found id: ""
	I1202 12:52:34.174500   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.174510   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:34.174518   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:34.174571   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:34.226162   59162 cri.go:89] found id: ""
	I1202 12:52:34.226190   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.226201   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:34.226208   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:34.226271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:34.269202   59162 cri.go:89] found id: ""
	I1202 12:52:34.269230   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.269240   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:34.269248   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:34.269327   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:34.304571   59162 cri.go:89] found id: ""
	I1202 12:52:34.304604   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.304615   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:34.304621   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:34.304670   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:34.339285   59162 cri.go:89] found id: ""
	I1202 12:52:34.339316   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.339327   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:34.339334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:34.339401   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:34.374919   59162 cri.go:89] found id: ""
	I1202 12:52:34.374952   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.374964   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:34.374973   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:34.375035   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:34.409292   59162 cri.go:89] found id: ""
	I1202 12:52:34.409319   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.409330   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:34.409337   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:34.409404   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:34.442536   59162 cri.go:89] found id: ""
	I1202 12:52:34.442561   59162 logs.go:282] 0 containers: []
	W1202 12:52:34.442568   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:34.442576   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:34.442587   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:34.494551   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:34.494582   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:34.508684   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:34.508713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:34.572790   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:34.572816   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:34.572835   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:34.649327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:34.649358   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:37.190648   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:37.203913   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:37.203966   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:37.243165   59162 cri.go:89] found id: ""
	I1202 12:52:37.243186   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.243194   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:37.243199   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:37.243246   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:37.279317   59162 cri.go:89] found id: ""
	I1202 12:52:37.279343   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.279351   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:37.279356   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:37.279411   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:37.312655   59162 cri.go:89] found id: ""
	I1202 12:52:37.312684   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.312693   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:37.312702   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:37.312748   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:37.346291   59162 cri.go:89] found id: ""
	I1202 12:52:37.346319   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.346328   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:37.346334   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:37.346382   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:37.381534   59162 cri.go:89] found id: ""
	I1202 12:52:37.381555   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.381563   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:37.381569   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:37.381621   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:37.416990   59162 cri.go:89] found id: ""
	I1202 12:52:37.417013   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.417020   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:37.417026   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:37.417083   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:37.451149   59162 cri.go:89] found id: ""
	I1202 12:52:37.451174   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.451182   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:37.451187   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:37.451233   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:37.485902   59162 cri.go:89] found id: ""
	I1202 12:52:37.485929   59162 logs.go:282] 0 containers: []
	W1202 12:52:37.485940   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:37.485950   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:37.485970   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:37.541615   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:37.541645   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:37.554846   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:37.554866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:37.622432   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:37.622457   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:37.622471   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:37.708793   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:37.708832   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
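
	From here to the end of the excerpt, process 59162 is stuck in the same cycle: pgrep for a kube-apiserver process, then one `sudo crictl ps -a --quiet --name=<component>` probe per control-plane component, and, since every probe comes back empty, a round of log gathering before retrying a few seconds later. The sketch below reproduces just the container probe. The component list and the crictl command are taken verbatim from the log; running directly on the node with sudo available is an assumption.

	// probe_control_plane.go: minimal sketch (not minikube's implementation)
	// that reruns the probe seen above: for each control-plane component,
	// ask crictl for matching containers and report the ones that are missing.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}

		var missing []string
		for _, name := range components {
			// Same command the log shows: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil || strings.TrimSpace(string(out)) == "" {
				missing = append(missing, name)
			}
		}

		if len(missing) == 0 {
			fmt.Println("all control-plane containers present")
			return
		}
		fmt.Printf("no containers found for: %s\n", strings.Join(missing, ", "))
	}
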
	I1202 12:52:35.154170   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:37.653220   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:36.660437   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:37.751726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.252016   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:40.246822   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:40.260893   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:40.260959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:40.294743   59162 cri.go:89] found id: ""
	I1202 12:52:40.294773   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.294782   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:40.294789   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:40.294845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:40.338523   59162 cri.go:89] found id: ""
	I1202 12:52:40.338557   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.338570   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:40.338577   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:40.338628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:40.373134   59162 cri.go:89] found id: ""
	I1202 12:52:40.373162   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.373170   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:40.373176   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:40.373225   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:40.410197   59162 cri.go:89] found id: ""
	I1202 12:52:40.410233   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.410247   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:40.410256   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:40.410333   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:40.442497   59162 cri.go:89] found id: ""
	I1202 12:52:40.442521   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.442530   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:40.442536   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:40.442597   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:40.477835   59162 cri.go:89] found id: ""
	I1202 12:52:40.477863   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.477872   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:40.477879   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:40.477936   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:40.511523   59162 cri.go:89] found id: ""
	I1202 12:52:40.511547   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.511559   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:40.511567   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:40.511628   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:40.545902   59162 cri.go:89] found id: ""
	I1202 12:52:40.545928   59162 logs.go:282] 0 containers: []
	W1202 12:52:40.545942   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:40.545962   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:40.545976   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:40.595638   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:40.595669   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:40.609023   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:40.609043   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:40.680826   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:40.680848   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:40.680866   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:40.756551   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:40.756579   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:43.295761   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:43.308764   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:43.308836   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:43.343229   59162 cri.go:89] found id: ""
	I1202 12:52:43.343258   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.343268   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:43.343276   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:43.343335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:39.653604   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:42.152871   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:39.732455   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:42.750873   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.250740   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
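
	Interleaved with that cycle, two other test processes (58902 and 57877) keep polling their metrics-server pods, which stay Ready=False for the whole excerpt. Each of those pod_ready lines amounts to a readiness check like the sketch below. The pod name is copied from the log; the namespace, the reliance on the default kubeconfig/context, and the poll interval are assumptions for illustration only.

	// pod_ready_check.go: minimal sketch of the readiness poll, using kubectl's
	// jsonpath output to read the pod's Ready condition until it flips to True
	// or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(namespace, pod string) (bool, error) {
		out, err := exec.Command(
			"kubectl", "-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			ready, err := podReady("kube-system", "metrics-server-6867b74b74-fjspk")
			if err == nil && ready {
				fmt.Println("metrics-server is Ready")
				return
			}
			fmt.Println(`pod "metrics-server-6867b74b74-fjspk" still has "Ready":"False"`)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for metrics-server to become Ready")
	}
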
	I1202 12:52:43.376841   59162 cri.go:89] found id: ""
	I1202 12:52:43.376861   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.376868   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:43.376874   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:43.376918   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:43.415013   59162 cri.go:89] found id: ""
	I1202 12:52:43.415033   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.415041   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:43.415046   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:43.415094   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:43.451563   59162 cri.go:89] found id: ""
	I1202 12:52:43.451590   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.451601   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:43.451608   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:43.451658   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:43.492838   59162 cri.go:89] found id: ""
	I1202 12:52:43.492859   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.492867   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:43.492872   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:43.492934   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:43.531872   59162 cri.go:89] found id: ""
	I1202 12:52:43.531898   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.531908   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:43.531914   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:43.531957   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:43.566235   59162 cri.go:89] found id: ""
	I1202 12:52:43.566260   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.566270   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:43.566277   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:43.566332   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:43.601502   59162 cri.go:89] found id: ""
	I1202 12:52:43.601531   59162 logs.go:282] 0 containers: []
	W1202 12:52:43.601542   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:43.601553   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:43.601567   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:43.650984   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:43.651012   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:43.664273   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:43.664296   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:43.735791   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:43.735819   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:43.735833   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:43.817824   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:43.817861   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.356130   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:46.368755   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:46.368835   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:46.404552   59162 cri.go:89] found id: ""
	I1202 12:52:46.404574   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.404582   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:46.404588   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:46.404640   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:46.438292   59162 cri.go:89] found id: ""
	I1202 12:52:46.438318   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.438329   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:46.438337   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:46.438397   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:46.471614   59162 cri.go:89] found id: ""
	I1202 12:52:46.471636   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.471643   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:46.471649   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:46.471752   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:46.502171   59162 cri.go:89] found id: ""
	I1202 12:52:46.502193   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.502201   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:46.502207   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:46.502250   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:46.533820   59162 cri.go:89] found id: ""
	I1202 12:52:46.533842   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.533851   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:46.533859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:46.533914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:46.566891   59162 cri.go:89] found id: ""
	I1202 12:52:46.566918   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.566928   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:46.566936   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:46.566980   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:46.599112   59162 cri.go:89] found id: ""
	I1202 12:52:46.599143   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.599154   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:46.599161   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:46.599215   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:46.630794   59162 cri.go:89] found id: ""
	I1202 12:52:46.630837   59162 logs.go:282] 0 containers: []
	W1202 12:52:46.630849   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:46.630860   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:46.630876   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:46.644180   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:46.644210   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:46.705881   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:46.705921   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:46.705936   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:46.781327   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:46.781359   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:46.820042   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:46.820072   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:44.654330   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:47.152273   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:45.816427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:48.884464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
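
	The third interleaved failure is process 61173 (libmachine), which cannot even open a TCP connection to 192.168.39.154:22, so its minikube VM is unreachable at the network level rather than merely slow to boot. A plain dial probe, sketched below, is enough to confirm that from the host; the IP and port come from the log, while the attempt count and timeout are arbitrary.

	// ssh_reachability.go: minimal sketch. Assumption: plain TCP reachability is
	// enough to distinguish "no route to host" from a guest that is up but slow
	// to start sshd.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.154:22" // IP taken from the log lines above
		for i := 0; i < 10; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err != nil {
				fmt.Printf("attempt %d: %v\n", i+1, err)
				time.Sleep(3 * time.Second)
				continue
			}
			conn.Close()
			fmt.Println("port 22 is reachable; the guest is at least network-visible")
			return
		}
		fmt.Println("port 22 never became reachable; check the VM and host networking")
	}
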
	I1202 12:52:47.751118   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.752726   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:49.368930   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:49.381506   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:49.381556   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:49.417928   59162 cri.go:89] found id: ""
	I1202 12:52:49.417955   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.417965   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:49.417977   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:49.418034   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:49.450248   59162 cri.go:89] found id: ""
	I1202 12:52:49.450276   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.450286   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:49.450295   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:49.450366   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:49.484288   59162 cri.go:89] found id: ""
	I1202 12:52:49.484311   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.484318   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:49.484323   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:49.484372   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:49.518565   59162 cri.go:89] found id: ""
	I1202 12:52:49.518585   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.518595   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:49.518602   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:49.518650   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:49.552524   59162 cri.go:89] found id: ""
	I1202 12:52:49.552549   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.552556   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:49.552561   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:49.552609   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:49.586570   59162 cri.go:89] found id: ""
	I1202 12:52:49.586599   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.586610   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:49.586617   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:49.586672   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:49.622561   59162 cri.go:89] found id: ""
	I1202 12:52:49.622590   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.622601   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:49.622609   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:49.622666   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:49.659092   59162 cri.go:89] found id: ""
	I1202 12:52:49.659117   59162 logs.go:282] 0 containers: []
	W1202 12:52:49.659129   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:49.659152   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:49.659170   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:49.672461   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:49.672491   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:49.738609   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:49.738637   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:49.738670   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:49.820458   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:49.820488   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.860240   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:49.860269   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.411571   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:52.425037   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:52.425106   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:52.458215   59162 cri.go:89] found id: ""
	I1202 12:52:52.458244   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.458255   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:52.458262   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:52.458316   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:52.491781   59162 cri.go:89] found id: ""
	I1202 12:52:52.491809   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.491820   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:52.491827   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:52.491879   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:52.528829   59162 cri.go:89] found id: ""
	I1202 12:52:52.528855   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.528864   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:52.528870   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:52.528914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:52.560930   59162 cri.go:89] found id: ""
	I1202 12:52:52.560957   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.560965   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:52.560971   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:52.561021   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:52.594102   59162 cri.go:89] found id: ""
	I1202 12:52:52.594139   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.594152   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:52.594160   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:52.594222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:52.627428   59162 cri.go:89] found id: ""
	I1202 12:52:52.627452   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.627460   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:52.627465   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:52.627529   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:52.659143   59162 cri.go:89] found id: ""
	I1202 12:52:52.659167   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.659175   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:52.659180   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:52.659230   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:52.691603   59162 cri.go:89] found id: ""
	I1202 12:52:52.691625   59162 logs.go:282] 0 containers: []
	W1202 12:52:52.691632   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:52.691640   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:52.691651   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:52.741989   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:52.742016   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:52.755769   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:52.755790   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:52.826397   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:52.826418   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:52.826431   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:52.904705   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:52.904734   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:49.653476   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:52.152372   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:51.755127   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.252182   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:55.449363   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:55.462294   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:55.462350   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:55.500829   59162 cri.go:89] found id: ""
	I1202 12:52:55.500856   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.500865   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:55.500871   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:55.500927   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:55.533890   59162 cri.go:89] found id: ""
	I1202 12:52:55.533920   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.533931   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:55.533942   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:55.533998   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:55.566686   59162 cri.go:89] found id: ""
	I1202 12:52:55.566715   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.566725   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:55.566736   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:55.566790   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:55.598330   59162 cri.go:89] found id: ""
	I1202 12:52:55.598357   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.598367   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:55.598374   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:55.598429   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:55.630648   59162 cri.go:89] found id: ""
	I1202 12:52:55.630676   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.630686   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:55.630694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:55.630755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:55.664611   59162 cri.go:89] found id: ""
	I1202 12:52:55.664633   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.664640   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:55.664645   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:55.664687   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:55.697762   59162 cri.go:89] found id: ""
	I1202 12:52:55.697789   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.697797   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:55.697803   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:55.697853   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:55.735239   59162 cri.go:89] found id: ""
	I1202 12:52:55.735263   59162 logs.go:282] 0 containers: []
	W1202 12:52:55.735271   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:55.735279   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:55.735292   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:55.805187   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:55.805217   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:55.805233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:55.888420   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:55.888452   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:52:55.927535   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:55.927561   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:55.976883   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:55.976909   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
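
	Each "Gathering logs for ..." step above maps to a concrete command that also appears verbatim in the log: journalctl for kubelet and CRI-O, a filtered dmesg, `kubectl describe nodes` against the node-local kubeconfig, and a crictl/docker container listing. When debugging a run like this by hand it can help to capture all of them in one pass; the sketch below does that on the node, writing each command's output to a file. The commands are copied from the log, while the output layout and the choice to keep output even when a command fails are assumptions.

	// gather_node_logs.go: minimal sketch that reruns the gathering commands the
	// log shows, saving each one's output to ./node-logs/<name>.txt.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	func main() {
		cmds := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"describe-nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
			"crio":             "sudo journalctl -u crio -n 400",
			"container-status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}

		if err := os.MkdirAll("node-logs", 0o755); err != nil {
			panic(err)
		}
		for name, cmd := range cmds {
			// CombinedOutput keeps stderr too, which is where the useful errors
			// (e.g. "connection refused") show up in the failures above.
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: %v (output kept anyway)\n", name, err)
			}
			_ = os.WriteFile(filepath.Join("node-logs", name+".txt"), out, 0o644)
		}
	}
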
	I1202 12:52:54.152753   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:56.154364   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.654202   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:54.968436   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:58.036631   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:52:56.750816   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.752427   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:52:58.490700   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:52:58.504983   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:52:58.505053   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:52:58.541332   59162 cri.go:89] found id: ""
	I1202 12:52:58.541352   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.541359   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:52:58.541365   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:52:58.541409   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:52:58.579437   59162 cri.go:89] found id: ""
	I1202 12:52:58.579459   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.579466   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:52:58.579472   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:52:58.579521   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:52:58.617374   59162 cri.go:89] found id: ""
	I1202 12:52:58.617406   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.617417   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:52:58.617425   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:52:58.617486   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:52:58.653242   59162 cri.go:89] found id: ""
	I1202 12:52:58.653269   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.653280   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:52:58.653287   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:52:58.653345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:52:58.686171   59162 cri.go:89] found id: ""
	I1202 12:52:58.686201   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.686210   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:52:58.686215   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:52:58.686262   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:52:58.719934   59162 cri.go:89] found id: ""
	I1202 12:52:58.719956   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.719966   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:52:58.719974   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:52:58.720030   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:52:58.759587   59162 cri.go:89] found id: ""
	I1202 12:52:58.759610   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.759619   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:52:58.759626   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:52:58.759678   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:52:58.790885   59162 cri.go:89] found id: ""
	I1202 12:52:58.790908   59162 logs.go:282] 0 containers: []
	W1202 12:52:58.790915   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:52:58.790922   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:52:58.790934   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:52:58.840192   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:52:58.840220   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:52:58.853639   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:52:58.853663   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:52:58.924643   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:52:58.924669   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:52:58.924679   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:52:59.013916   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:52:59.013945   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.552305   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:01.565577   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:01.565642   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:01.598261   59162 cri.go:89] found id: ""
	I1202 12:53:01.598294   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.598304   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:01.598310   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:01.598377   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:01.631527   59162 cri.go:89] found id: ""
	I1202 12:53:01.631556   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.631565   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:01.631570   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:01.631631   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:01.670788   59162 cri.go:89] found id: ""
	I1202 12:53:01.670812   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.670820   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:01.670826   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:01.670880   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:01.708801   59162 cri.go:89] found id: ""
	I1202 12:53:01.708828   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.708838   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:01.708846   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:01.708914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:01.746053   59162 cri.go:89] found id: ""
	I1202 12:53:01.746074   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.746083   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:01.746120   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:01.746184   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:01.780873   59162 cri.go:89] found id: ""
	I1202 12:53:01.780894   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.780901   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:01.780907   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:01.780951   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:01.817234   59162 cri.go:89] found id: ""
	I1202 12:53:01.817259   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.817269   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:01.817276   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:01.817335   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:01.850277   59162 cri.go:89] found id: ""
	I1202 12:53:01.850302   59162 logs.go:282] 0 containers: []
	W1202 12:53:01.850317   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:01.850327   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:01.850342   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:01.933014   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:01.933055   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:01.971533   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:01.971562   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:02.020280   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:02.020311   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:02.034786   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:02.034814   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:02.104013   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:01.152305   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.153925   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:01.250308   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:03.250937   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:05.751259   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.604595   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:04.618004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:04.618057   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:04.651388   59162 cri.go:89] found id: ""
	I1202 12:53:04.651414   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.651428   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:04.651436   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:04.651495   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:04.686973   59162 cri.go:89] found id: ""
	I1202 12:53:04.686998   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.687005   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:04.687019   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:04.687063   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:04.720630   59162 cri.go:89] found id: ""
	I1202 12:53:04.720654   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.720661   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:04.720667   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:04.720724   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:04.754657   59162 cri.go:89] found id: ""
	I1202 12:53:04.754682   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.754689   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:04.754694   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:04.754746   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:04.787583   59162 cri.go:89] found id: ""
	I1202 12:53:04.787611   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.787621   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:04.787628   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:04.787686   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:04.818962   59162 cri.go:89] found id: ""
	I1202 12:53:04.818988   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.818999   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:04.819006   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:04.819059   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:04.852015   59162 cri.go:89] found id: ""
	I1202 12:53:04.852035   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.852042   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:04.852047   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:04.852097   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:04.886272   59162 cri.go:89] found id: ""
	I1202 12:53:04.886294   59162 logs.go:282] 0 containers: []
	W1202 12:53:04.886301   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:04.886309   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:04.886320   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:04.934682   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:04.934712   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:04.947889   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:04.947911   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:05.018970   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:05.018995   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:05.019010   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:05.098203   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:05.098233   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:07.637320   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:07.650643   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:07.650706   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:07.683468   59162 cri.go:89] found id: ""
	I1202 12:53:07.683491   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.683499   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:07.683504   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:07.683565   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:07.719765   59162 cri.go:89] found id: ""
	I1202 12:53:07.719792   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.719799   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:07.719805   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:07.719855   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:07.760939   59162 cri.go:89] found id: ""
	I1202 12:53:07.760986   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.760996   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:07.761004   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:07.761066   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:07.799175   59162 cri.go:89] found id: ""
	I1202 12:53:07.799219   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.799231   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:07.799239   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:07.799300   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:07.831957   59162 cri.go:89] found id: ""
	I1202 12:53:07.831987   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.831999   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:07.832007   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:07.832067   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:07.865982   59162 cri.go:89] found id: ""
	I1202 12:53:07.866008   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.866015   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:07.866022   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:07.866080   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:07.903443   59162 cri.go:89] found id: ""
	I1202 12:53:07.903467   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.903477   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:07.903484   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:07.903541   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:07.939268   59162 cri.go:89] found id: ""
	I1202 12:53:07.939293   59162 logs.go:282] 0 containers: []
	W1202 12:53:07.939300   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:07.939310   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:07.939324   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:07.952959   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:07.952984   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:08.039178   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:08.039207   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:08.039223   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:08.121432   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:08.121469   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:08.164739   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:08.164767   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:05.652537   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:07.652894   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:04.116377   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:07.188477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:08.250489   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.250657   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:10.718599   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:10.731079   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:10.731154   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:10.767605   59162 cri.go:89] found id: ""
	I1202 12:53:10.767626   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.767633   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:10.767639   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:10.767689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:10.800464   59162 cri.go:89] found id: ""
	I1202 12:53:10.800483   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.800491   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:10.800496   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:10.800554   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:10.840808   59162 cri.go:89] found id: ""
	I1202 12:53:10.840836   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.840853   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:10.840859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:10.840922   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:10.877653   59162 cri.go:89] found id: ""
	I1202 12:53:10.877681   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.877690   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:10.877698   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:10.877755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:10.915849   59162 cri.go:89] found id: ""
	I1202 12:53:10.915873   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.915883   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:10.915891   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:10.915953   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:10.948652   59162 cri.go:89] found id: ""
	I1202 12:53:10.948680   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.948691   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:10.948697   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:10.948755   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:10.983126   59162 cri.go:89] found id: ""
	I1202 12:53:10.983154   59162 logs.go:282] 0 containers: []
	W1202 12:53:10.983165   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:10.983172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:10.983232   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:11.015350   59162 cri.go:89] found id: ""
	I1202 12:53:11.015378   59162 logs.go:282] 0 containers: []
	W1202 12:53:11.015390   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:11.015400   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:11.015414   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:11.028713   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:11.028737   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:11.095904   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:11.095932   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:11.095950   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:11.179078   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:11.179114   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:11.216075   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:11.216106   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:09.653482   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:12.152117   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.272450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:12.750358   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:14.751316   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:13.774975   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:13.787745   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:13.787804   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:13.821793   59162 cri.go:89] found id: ""
	I1202 12:53:13.821824   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.821834   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:13.821840   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:13.821885   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:13.854831   59162 cri.go:89] found id: ""
	I1202 12:53:13.854855   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.854864   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:13.854871   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:13.854925   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:13.885113   59162 cri.go:89] found id: ""
	I1202 12:53:13.885142   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.885149   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:13.885155   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:13.885201   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:13.915811   59162 cri.go:89] found id: ""
	I1202 12:53:13.915841   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.915851   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:13.915859   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:13.915914   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:13.948908   59162 cri.go:89] found id: ""
	I1202 12:53:13.948936   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.948946   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:13.948953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:13.949016   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:13.986502   59162 cri.go:89] found id: ""
	I1202 12:53:13.986531   59162 logs.go:282] 0 containers: []
	W1202 12:53:13.986540   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:13.986548   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:13.986607   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:14.018182   59162 cri.go:89] found id: ""
	I1202 12:53:14.018210   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.018221   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:14.018229   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:14.018287   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:14.054185   59162 cri.go:89] found id: ""
	I1202 12:53:14.054221   59162 logs.go:282] 0 containers: []
	W1202 12:53:14.054233   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:14.054244   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:14.054272   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:14.131353   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.131381   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:14.131402   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:14.212787   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:14.212822   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:14.254043   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:14.254073   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:14.309591   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:14.309620   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:16.824827   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:16.838150   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:16.838210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:16.871550   59162 cri.go:89] found id: ""
	I1202 12:53:16.871570   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.871577   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:16.871582   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:16.871625   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:16.908736   59162 cri.go:89] found id: ""
	I1202 12:53:16.908766   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.908775   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:16.908781   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:16.908844   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:16.941404   59162 cri.go:89] found id: ""
	I1202 12:53:16.941427   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.941437   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:16.941444   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:16.941500   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:16.971984   59162 cri.go:89] found id: ""
	I1202 12:53:16.972011   59162 logs.go:282] 0 containers: []
	W1202 12:53:16.972023   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:16.972030   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:16.972079   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:17.004573   59162 cri.go:89] found id: ""
	I1202 12:53:17.004596   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.004607   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:17.004614   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:17.004661   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:17.037171   59162 cri.go:89] found id: ""
	I1202 12:53:17.037199   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.037210   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:17.037218   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:17.037271   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:17.070862   59162 cri.go:89] found id: ""
	I1202 12:53:17.070888   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.070899   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:17.070906   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:17.070959   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:17.102642   59162 cri.go:89] found id: ""
	I1202 12:53:17.102668   59162 logs.go:282] 0 containers: []
	W1202 12:53:17.102678   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:17.102688   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:17.102701   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:17.182590   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:17.182623   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:17.224313   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:17.224346   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:17.272831   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:17.272855   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:17.286217   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:17.286240   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:17.357274   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:14.153570   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.651955   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:18.654103   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:16.340429   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:17.252036   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.751295   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:19.858294   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:19.871731   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:19.871787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:19.906270   59162 cri.go:89] found id: ""
	I1202 12:53:19.906290   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.906297   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:19.906303   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:19.906345   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:19.937769   59162 cri.go:89] found id: ""
	I1202 12:53:19.937790   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.937797   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:19.937802   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:19.937845   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:19.971667   59162 cri.go:89] found id: ""
	I1202 12:53:19.971689   59162 logs.go:282] 0 containers: []
	W1202 12:53:19.971706   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:19.971714   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:19.971787   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:20.005434   59162 cri.go:89] found id: ""
	I1202 12:53:20.005455   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.005461   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:20.005467   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:20.005512   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:20.041817   59162 cri.go:89] found id: ""
	I1202 12:53:20.041839   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.041848   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:20.041856   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:20.041906   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:20.073923   59162 cri.go:89] found id: ""
	I1202 12:53:20.073946   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.073958   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:20.073966   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:20.074026   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:20.107360   59162 cri.go:89] found id: ""
	I1202 12:53:20.107398   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.107409   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:20.107416   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:20.107479   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:20.153919   59162 cri.go:89] found id: ""
	I1202 12:53:20.153942   59162 logs.go:282] 0 containers: []
	W1202 12:53:20.153952   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:20.153963   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:20.153977   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:20.211581   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:20.211610   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:20.227589   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:20.227615   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:20.305225   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:20.305250   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:20.305265   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:20.382674   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:20.382713   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:22.924662   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:22.940038   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:22.940101   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:22.984768   59162 cri.go:89] found id: ""
	I1202 12:53:22.984795   59162 logs.go:282] 0 containers: []
	W1202 12:53:22.984806   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:22.984815   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:22.984876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:23.024159   59162 cri.go:89] found id: ""
	I1202 12:53:23.024180   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.024188   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:23.024194   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:23.024254   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:23.059929   59162 cri.go:89] found id: ""
	I1202 12:53:23.059948   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.059956   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:23.059961   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:23.060003   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:23.093606   59162 cri.go:89] found id: ""
	I1202 12:53:23.093627   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.093633   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:23.093639   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:23.093689   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:23.127868   59162 cri.go:89] found id: ""
	I1202 12:53:23.127893   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.127904   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:23.127910   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:23.127965   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:23.164988   59162 cri.go:89] found id: ""
	I1202 12:53:23.165006   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.165013   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:23.165018   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:23.165058   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:23.196389   59162 cri.go:89] found id: ""
	I1202 12:53:23.196412   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.196423   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:23.196430   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:23.196481   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:23.229337   59162 cri.go:89] found id: ""
	I1202 12:53:23.229358   59162 logs.go:282] 0 containers: []
	W1202 12:53:23.229366   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:23.229376   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:23.229404   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:23.284041   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:23.284066   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:23.297861   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:23.297884   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:53:21.152126   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:23.154090   58902 pod_ready.go:103] pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:22.420399   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:22.250790   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:24.252122   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	W1202 12:53:23.364113   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:23.364131   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:23.364142   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:23.446244   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:23.446273   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:25.986668   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:25.998953   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:25.999013   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:26.034844   59162 cri.go:89] found id: ""
	I1202 12:53:26.034868   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.034876   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:26.034883   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:26.034938   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:26.067050   59162 cri.go:89] found id: ""
	I1202 12:53:26.067076   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.067083   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:26.067089   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:26.067152   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:26.098705   59162 cri.go:89] found id: ""
	I1202 12:53:26.098735   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.098746   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:26.098754   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:26.098812   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:26.131283   59162 cri.go:89] found id: ""
	I1202 12:53:26.131312   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.131321   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:26.131327   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:26.131379   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:26.164905   59162 cri.go:89] found id: ""
	I1202 12:53:26.164933   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.164943   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:26.164950   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:26.165009   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:26.196691   59162 cri.go:89] found id: ""
	I1202 12:53:26.196715   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.196724   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:26.196732   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:26.196789   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:26.227341   59162 cri.go:89] found id: ""
	I1202 12:53:26.227364   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.227374   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:26.227380   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:26.227436   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:26.260569   59162 cri.go:89] found id: ""
	I1202 12:53:26.260589   59162 logs.go:282] 0 containers: []
	W1202 12:53:26.260597   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:26.260606   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:26.260619   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:26.313150   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:26.313175   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:26.327732   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:26.327762   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:26.392748   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:26.392768   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:26.392778   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:26.474456   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:26.474484   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:24.146771   58902 pod_ready.go:82] duration metric: took 4m0.000100995s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" ...
	E1202 12:53:24.146796   58902 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-fjspk" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 12:53:24.146811   58902 pod_ready.go:39] duration metric: took 4m6.027386938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:53:24.146852   58902 kubeadm.go:597] duration metric: took 4m15.570212206s to restartPrimaryControlPlane
	W1202 12:53:24.146901   58902 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:24.146926   58902 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
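
At this point the test driver has waited the full 4m0s for metrics-server-6867b74b74-fjspk to reach the Ready condition (pod_ready.go), gives up, and falls back to a full kubeadm reset. A rough manual equivalent of that readiness wait is sketched below; it is an approximation rather than the test's actual Go helper, and the k8s-app=metrics-server label selector is an assumption.

    # Approximate manual equivalent of the readiness wait that timed out above
    # (assumption: label selector k8s-app=metrics-server matches the addon's pods).
    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=240s
    kubectl -n kube-system get pod metrics-server-6867b74b74-fjspk \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
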
	I1202 12:53:25.492478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:26.253906   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:28.752313   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:29.018514   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:29.032328   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:29.032457   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:29.067696   59162 cri.go:89] found id: ""
	I1202 12:53:29.067720   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.067732   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:29.067738   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:29.067794   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:29.101076   59162 cri.go:89] found id: ""
	I1202 12:53:29.101096   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.101103   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:29.101108   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:29.101150   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:29.136446   59162 cri.go:89] found id: ""
	I1202 12:53:29.136473   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.136483   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:29.136489   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:29.136552   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:29.170820   59162 cri.go:89] found id: ""
	I1202 12:53:29.170849   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.170860   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:29.170868   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:29.170931   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:29.205972   59162 cri.go:89] found id: ""
	I1202 12:53:29.206001   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.206012   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:29.206020   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:29.206086   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:29.242118   59162 cri.go:89] found id: ""
	I1202 12:53:29.242155   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.242165   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:29.242172   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:29.242222   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:29.281377   59162 cri.go:89] found id: ""
	I1202 12:53:29.281405   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.281417   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:29.281426   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:29.281487   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:29.316350   59162 cri.go:89] found id: ""
	I1202 12:53:29.316381   59162 logs.go:282] 0 containers: []
	W1202 12:53:29.316393   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:29.316404   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:29.316418   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:29.392609   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:29.392648   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:29.430777   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:29.430804   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:29.484157   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:29.484190   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:29.498434   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:29.498457   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:29.568203   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.069043   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:32.081796   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:53:32.081867   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:53:32.115767   59162 cri.go:89] found id: ""
	I1202 12:53:32.115789   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.115797   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:53:32.115802   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:53:32.115861   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:53:32.145962   59162 cri.go:89] found id: ""
	I1202 12:53:32.145984   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.145992   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:53:32.145999   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:53:32.146046   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:53:32.177709   59162 cri.go:89] found id: ""
	I1202 12:53:32.177734   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.177744   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:53:32.177752   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:53:32.177796   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:53:32.211897   59162 cri.go:89] found id: ""
	I1202 12:53:32.211921   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.211930   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:53:32.211937   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:53:32.211994   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:53:32.244401   59162 cri.go:89] found id: ""
	I1202 12:53:32.244425   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.244434   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:53:32.244442   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:53:32.244503   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:53:32.278097   59162 cri.go:89] found id: ""
	I1202 12:53:32.278123   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.278140   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:53:32.278151   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:53:32.278210   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:53:32.312740   59162 cri.go:89] found id: ""
	I1202 12:53:32.312774   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.312785   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:53:32.312793   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:53:32.312860   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:53:32.345849   59162 cri.go:89] found id: ""
	I1202 12:53:32.345878   59162 logs.go:282] 0 containers: []
	W1202 12:53:32.345889   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:53:32.345901   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:53:32.345917   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:53:32.395961   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:53:32.395998   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:53:32.409582   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:53:32.409609   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:53:32.473717   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:53:32.473746   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:53:32.473763   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:53:32.548547   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:53:32.548580   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:53:31.572430   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:31.251492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:33.251616   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.750762   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:35.088628   59162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:53:35.102152   59162 kubeadm.go:597] duration metric: took 4m2.014751799s to restartPrimaryControlPlane
	W1202 12:53:35.102217   59162 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 12:53:35.102244   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 12:53:36.768528   59162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.666262663s)
	I1202 12:53:36.768601   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:36.783104   59162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:36.792966   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:36.802188   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:36.802205   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:36.802234   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:36.811253   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:36.811290   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:36.820464   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:36.829386   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:36.829426   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:36.838814   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.847241   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:36.847272   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:36.856295   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:36.864892   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:36.864929   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:36.873699   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:37.076297   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
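
Having abandoned the restart, minikube resets the control plane and re-initializes it: kubeadm reset, a stale-config check on /etc/kubernetes/*.conf (ls, grep for the control-plane endpoint, rm -f), then kubeadm init against the generated kubeadm.yaml. The bash sketch below condenses that sequence; the commands are the ones logged above, only the loop structure is added.

    # Condensed sketch of the reset / re-init sequence logged above.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    for conf in admin kubelet controller-manager scheduler; do
      f=/etc/kubernetes/$conf.conf
      # Remove any kubeconfig that does not point at the expected control-plane endpoint.
      sudo grep -q https://control-plane.minikube.internal:8443 "$f" || sudo rm -f "$f"
    done
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
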
	I1202 12:53:34.644489   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:38.250676   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.250779   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:40.724427   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:43.796493   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:42.251341   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:44.751292   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.547760   58902 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.400809303s)
	I1202 12:53:50.547840   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:53:50.564051   58902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:53:50.573674   58902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:53:50.582945   58902 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:53:50.582965   58902 kubeadm.go:157] found existing configuration files:
	
	I1202 12:53:50.582998   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:53:50.591979   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:53:50.592030   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:53:50.601043   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:53:50.609896   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:53:50.609945   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:53:50.618918   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.627599   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:53:50.627634   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:53:50.636459   58902 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:53:50.644836   58902 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:53:50.644880   58902 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:53:50.653742   58902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:53:50.698104   58902 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 12:53:50.698187   58902 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:53:50.811202   58902 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:53:50.811340   58902 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:53:50.811466   58902 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 12:53:50.822002   58902 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:53:47.252492   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:49.750168   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:50.823836   58902 out.go:235]   - Generating certificates and keys ...
	I1202 12:53:50.823933   58902 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:53:50.824031   58902 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:53:50.824141   58902 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:53:50.824223   58902 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:53:50.824328   58902 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:53:50.824402   58902 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:53:50.824500   58902 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:53:50.824583   58902 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:53:50.824697   58902 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:53:50.824826   58902 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:53:50.824896   58902 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:53:50.824984   58902 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:53:50.912363   58902 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:53:50.997719   58902 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 12:53:51.181182   58902 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:53:51.424413   58902 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:53:51.526033   58902 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:53:51.526547   58902 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:53:51.528947   58902 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:53:51.530665   58902 out.go:235]   - Booting up control plane ...
	I1202 12:53:51.530761   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:53:51.530862   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:53:51.530946   58902 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:53:51.551867   58902 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:53:51.557869   58902 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:53:51.557960   58902 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:53:51.690048   58902 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 12:53:51.690190   58902 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 12:53:52.190616   58902 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56624ms
	I1202 12:53:52.190735   58902 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
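The kubelet-check above is a plain HTTP probe against the kubelet's local healthz endpoint; the same check can be run by hand on the control-plane VM (endpoint taken from the log, everything else illustrative):

    # endpoint polled by kubeadm's kubelet-check phase
    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"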
	I1202 12:53:49.876477   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:52.948470   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:53:51.752318   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:54.250701   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:57.192620   58902 kubeadm.go:310] [api-check] The API server is healthy after 5.001974319s
	I1202 12:53:57.205108   58902 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 12:53:57.217398   58902 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 12:53:57.241642   58902 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 12:53:57.241842   58902 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-953044 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 12:53:57.252962   58902 kubeadm.go:310] [bootstrap-token] Using token: kqbw67.r50dkuvxntafmbtm
	I1202 12:53:57.254175   58902 out.go:235]   - Configuring RBAC rules ...
	I1202 12:53:57.254282   58902 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 12:53:57.258707   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 12:53:57.265127   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 12:53:57.268044   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 12:53:57.273630   58902 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 12:53:57.276921   58902 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 12:53:57.598936   58902 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 12:53:58.031759   58902 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 12:53:58.598943   58902 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 12:53:58.599838   58902 kubeadm.go:310] 
	I1202 12:53:58.599900   58902 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 12:53:58.599927   58902 kubeadm.go:310] 
	I1202 12:53:58.600020   58902 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 12:53:58.600031   58902 kubeadm.go:310] 
	I1202 12:53:58.600067   58902 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 12:53:58.600150   58902 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 12:53:58.600249   58902 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 12:53:58.600266   58902 kubeadm.go:310] 
	I1202 12:53:58.600343   58902 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 12:53:58.600353   58902 kubeadm.go:310] 
	I1202 12:53:58.600418   58902 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 12:53:58.600429   58902 kubeadm.go:310] 
	I1202 12:53:58.600500   58902 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 12:53:58.600602   58902 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 12:53:58.600694   58902 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 12:53:58.600704   58902 kubeadm.go:310] 
	I1202 12:53:58.600878   58902 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 12:53:58.600996   58902 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 12:53:58.601008   58902 kubeadm.go:310] 
	I1202 12:53:58.601121   58902 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601248   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 12:53:58.601281   58902 kubeadm.go:310] 	--control-plane 
	I1202 12:53:58.601298   58902 kubeadm.go:310] 
	I1202 12:53:58.601437   58902 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 12:53:58.601451   58902 kubeadm.go:310] 
	I1202 12:53:58.601570   58902 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kqbw67.r50dkuvxntafmbtm \
	I1202 12:53:58.601726   58902 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 12:53:58.601878   58902 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:53:58.602090   58902 cni.go:84] Creating CNI manager for ""
	I1202 12:53:58.602108   58902 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:53:58.603597   58902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:53:58.604832   58902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:53:58.616597   58902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
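The log records only the size of the bridge conflist (496 bytes), not its contents. For orientation, a generic bridge CNI config list of the kind this step writes looks roughly like the sketch below; the field values are illustrative and this is not the file minikube actually ships:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF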
	I1202 12:53:58.633585   58902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 12:53:58.633639   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:58.633694   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-953044 minikube.k8s.io/updated_at=2024_12_02T12_53_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=embed-certs-953044 minikube.k8s.io/primary=true
	I1202 12:53:58.843567   58902 ops.go:34] apiserver oom_adj: -16
	I1202 12:53:58.843643   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:56.252079   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:58.750596   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:53:59.344179   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:53:59.844667   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.343766   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:00.843808   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.343992   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:01.843750   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.344088   58902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 12:54:02.431425   58902 kubeadm.go:1113] duration metric: took 3.797838401s to wait for elevateKubeSystemPrivileges
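The run of identical "kubectl get sa default" commands before this is a poll loop: the elevateKubeSystemPrivileges step is treated as done once the default ServiceAccount exists. A hand-rolled equivalent of that wait (illustrative only; binary path and kubeconfig taken from the log):

    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done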
	I1202 12:54:02.431466   58902 kubeadm.go:394] duration metric: took 4m53.907154853s to StartCluster
	I1202 12:54:02.431488   58902 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.431574   58902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:54:02.433388   58902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:54:02.433759   58902 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 12:54:02.433844   58902 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 12:54:02.433961   58902 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-953044"
	I1202 12:54:02.433979   58902 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-953044"
	I1202 12:54:02.433978   58902 config.go:182] Loaded profile config "embed-certs-953044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:54:02.433983   58902 addons.go:69] Setting default-storageclass=true in profile "embed-certs-953044"
	I1202 12:54:02.434009   58902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-953044"
	I1202 12:54:02.433983   58902 addons.go:69] Setting metrics-server=true in profile "embed-certs-953044"
	I1202 12:54:02.434082   58902 addons.go:234] Setting addon metrics-server=true in "embed-certs-953044"
	W1202 12:54:02.434090   58902 addons.go:243] addon metrics-server should already be in state true
	I1202 12:54:02.434121   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	W1202 12:54:02.433990   58902 addons.go:243] addon storage-provisioner should already be in state true
	I1202 12:54:02.434195   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.434500   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434544   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434550   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434566   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.434589   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.434606   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.435408   58902 out.go:177] * Verifying Kubernetes components...
	I1202 12:54:02.436893   58902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:54:02.450113   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34447
	I1202 12:54:02.450620   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.451022   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.451047   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.451376   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.451545   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.454345   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I1202 12:54:02.454346   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1202 12:54:02.454788   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.454832   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.455251   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455268   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455281   58902 addons.go:234] Setting addon default-storageclass=true in "embed-certs-953044"
	W1202 12:54:02.455303   58902 addons.go:243] addon default-storageclass should already be in state true
	I1202 12:54:02.455336   58902 host.go:66] Checking if "embed-certs-953044" exists ...
	I1202 12:54:02.455286   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.455377   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.455570   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455696   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.455708   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.455739   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456068   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456085   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.456105   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.456122   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.470558   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I1202 12:54:02.470761   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I1202 12:54:02.470971   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471035   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43157
	I1202 12:54:02.471142   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471406   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471426   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471494   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.471620   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.471633   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.471955   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472019   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.472035   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.472110   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472127   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472446   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.472647   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.472685   58902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:54:02.472721   58902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:54:02.474380   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.474597   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.476328   58902 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 12:54:02.476338   58902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 12:54:02.477992   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 12:54:02.478008   58902 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 12:54:02.478022   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.478549   58902 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.478567   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 12:54:02.478584   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.481364   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481698   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.481725   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.481956   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.482008   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482150   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.482274   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.482417   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.482503   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.482521   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.482785   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.483079   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.483352   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.483478   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.489285   58902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I1202 12:54:02.489644   58902 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:54:02.490064   58902 main.go:141] libmachine: Using API Version  1
	I1202 12:54:02.490085   58902 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:54:02.490346   58902 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:54:02.490510   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetState
	I1202 12:54:02.491774   58902 main.go:141] libmachine: (embed-certs-953044) Calling .DriverName
	I1202 12:54:02.491961   58902 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.491974   58902 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 12:54:02.491990   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHHostname
	I1202 12:54:02.494680   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495069   58902 main.go:141] libmachine: (embed-certs-953044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:c4:51", ip: ""} in network mk-embed-certs-953044: {Iface:virbr4 ExpiryTime:2024-12-02 13:48:54 +0000 UTC Type:0 Mac:52:54:00:4a:c4:51 Iaid: IPaddr:192.168.72.203 Prefix:24 Hostname:embed-certs-953044 Clientid:01:52:54:00:4a:c4:51}
	I1202 12:54:02.495098   58902 main.go:141] libmachine: (embed-certs-953044) DBG | domain embed-certs-953044 has defined IP address 192.168.72.203 and MAC address 52:54:00:4a:c4:51 in network mk-embed-certs-953044
	I1202 12:54:02.495259   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHPort
	I1202 12:54:02.495392   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHKeyPath
	I1202 12:54:02.495582   58902 main.go:141] libmachine: (embed-certs-953044) Calling .GetSSHUsername
	I1202 12:54:02.495700   58902 sshutil.go:53] new ssh client: &{IP:192.168.72.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/embed-certs-953044/id_rsa Username:docker}
	I1202 12:54:02.626584   58902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:54:02.650914   58902 node_ready.go:35] waiting up to 6m0s for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658909   58902 node_ready.go:49] node "embed-certs-953044" has status "Ready":"True"
	I1202 12:54:02.658931   58902 node_ready.go:38] duration metric: took 7.986729ms for node "embed-certs-953044" to be "Ready" ...
	I1202 12:54:02.658939   58902 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:02.663878   58902 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:02.708572   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 12:54:02.711794   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 12:54:02.711813   58902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 12:54:02.729787   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 12:54:02.760573   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 12:54:02.760595   58902 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 12:54:02.814731   58902 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:02.814756   58902 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 12:54:02.867045   58902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 12:54:03.549497   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.549532   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.549914   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.549970   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.549999   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550010   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.550032   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.550256   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.550360   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.550336   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551311   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551333   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551629   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.551591   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.551670   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.551686   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.551694   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.551907   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.552278   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.552295   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.577295   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.577322   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.577618   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.577631   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.577647   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.835721   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.835752   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836073   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836092   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836108   58902 main.go:141] libmachine: Making call to close driver server
	I1202 12:54:03.836118   58902 main.go:141] libmachine: (embed-certs-953044) Calling .Close
	I1202 12:54:03.836460   58902 main.go:141] libmachine: Successfully made call to close driver server
	I1202 12:54:03.836478   58902 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 12:54:03.836489   58902 addons.go:475] Verifying addon metrics-server=true in "embed-certs-953044"
	I1202 12:54:03.836492   58902 main.go:141] libmachine: (embed-certs-953044) DBG | Closing plugin on server side
	I1202 12:54:03.838858   58902 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1202 12:54:03.840263   58902 addons.go:510] duration metric: took 1.406440873s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
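Enabling the addons only applies the manifests; the metrics-server Deployment still has to become Ready, which is what the later pod_ready lines keep polling. A manual spot-check, assuming the stock metrics-server resource names and the kubectl context written for this profile:

    kubectl --context embed-certs-953044 -n kube-system rollout status deploy/metrics-server
    kubectl --context embed-certs-953044 get apiservice v1beta1.metrics.k8s.io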
	I1202 12:53:59.032460   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:02.100433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:01.251084   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:03.252024   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:05.752273   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:04.669768   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:07.171770   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:08.180411   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:08.251624   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.751482   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:09.670413   58902 pod_ready.go:103] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:10.669602   58902 pod_ready.go:93] pod "etcd-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.669624   58902 pod_ready.go:82] duration metric: took 8.00571576s for pod "etcd-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.669634   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674276   58902 pod_ready.go:93] pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.674293   58902 pod_ready.go:82] duration metric: took 4.652882ms for pod "kube-apiserver-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.674301   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678330   58902 pod_ready.go:93] pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:10.678346   58902 pod_ready.go:82] duration metric: took 4.037883ms for pod "kube-controller-manager-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:10.678354   58902 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184565   58902 pod_ready.go:93] pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace has status "Ready":"True"
	I1202 12:54:12.184591   58902 pod_ready.go:82] duration metric: took 1.506229118s for pod "kube-scheduler-embed-certs-953044" in "kube-system" namespace to be "Ready" ...
	I1202 12:54:12.184601   58902 pod_ready.go:39] duration metric: took 9.525652092s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
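The per-pod readiness loop above can be approximated with kubectl's own wait primitive, using the component labels the log lists (selector shown for the static control-plane pods only; illustrative, not the code path minikube uses):

    kubectl --context embed-certs-953044 -n kube-system wait pod \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' \
      --for=condition=Ready --timeout=6m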
	I1202 12:54:12.184622   58902 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:12.184683   58902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:12.204339   58902 api_server.go:72] duration metric: took 9.770541552s to wait for apiserver process to appear ...
	I1202 12:54:12.204361   58902 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:12.204383   58902 api_server.go:253] Checking apiserver healthz at https://192.168.72.203:8443/healthz ...
	I1202 12:54:12.208020   58902 api_server.go:279] https://192.168.72.203:8443/healthz returned 200:
	ok
	I1202 12:54:12.208957   58902 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:12.208975   58902 api_server.go:131] duration metric: took 4.608337ms to wait for apiserver health ...
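The healthz probe above is a plain HTTPS GET against the apiserver endpoint shown in the log; by hand it looks like this (-k skips certificate verification in this sketch):

    curl -k https://192.168.72.203:8443/healthz
    # expected body, as logged above: ok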
	I1202 12:54:12.208982   58902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:12.215103   58902 system_pods.go:59] 9 kube-system pods found
	I1202 12:54:12.215123   58902 system_pods.go:61] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.215128   58902 system_pods.go:61] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.215132   58902 system_pods.go:61] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.215135   58902 system_pods.go:61] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.215145   58902 system_pods.go:61] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.215150   58902 system_pods.go:61] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.215157   58902 system_pods.go:61] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.215171   58902 system_pods.go:61] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.215181   58902 system_pods.go:61] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.215190   58902 system_pods.go:74] duration metric: took 6.203134ms to wait for pod list to return data ...
	I1202 12:54:12.215198   58902 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:12.217406   58902 default_sa.go:45] found service account: "default"
	I1202 12:54:12.217421   58902 default_sa.go:55] duration metric: took 2.217536ms for default service account to be created ...
	I1202 12:54:12.217427   58902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:12.221673   58902 system_pods.go:86] 9 kube-system pods found
	I1202 12:54:12.221690   58902 system_pods.go:89] "coredns-7c65d6cfc9-fwt6z" [06a23976-b261-4baa-8f66-e966addfb41a] Running
	I1202 12:54:12.221695   58902 system_pods.go:89] "coredns-7c65d6cfc9-tm4ct" [109d2f58-c2c8-4bf0-8232-fdbeb078305d] Running
	I1202 12:54:12.221701   58902 system_pods.go:89] "etcd-embed-certs-953044" [8f6fcd39-810b-45a4-91f0-9d449964beb1] Running
	I1202 12:54:12.221705   58902 system_pods.go:89] "kube-apiserver-embed-certs-953044" [976f7d34-0970-43b6-9b2a-18c2d7be0d63] Running
	I1202 12:54:12.221709   58902 system_pods.go:89] "kube-controller-manager-embed-certs-953044" [49c46bb7-8936-4c00-8764-56ae847aab27] Running
	I1202 12:54:12.221712   58902 system_pods.go:89] "kube-proxy-kg4z6" [c6b74e9c-47e4-4b1c-a219-685cc119219b] Running
	I1202 12:54:12.221716   58902 system_pods.go:89] "kube-scheduler-embed-certs-953044" [7940c2f7-a1b6-4c5b-879d-d72c3776ffbb] Running
	I1202 12:54:12.221724   58902 system_pods.go:89] "metrics-server-6867b74b74-fwhvq" [e5ad6b6a-5e6f-4a7f-afa6-9cc17a40114f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:12.221729   58902 system_pods.go:89] "storage-provisioner" [35fdd473-75b2-41d6-95bf-1bcab189dae5] Running
	I1202 12:54:12.221736   58902 system_pods.go:126] duration metric: took 4.304449ms to wait for k8s-apps to be running ...
	I1202 12:54:12.221745   58902 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:12.221780   58902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:12.238687   58902 system_svc.go:56] duration metric: took 16.934566ms WaitForService to wait for kubelet
	I1202 12:54:12.238707   58902 kubeadm.go:582] duration metric: took 9.804914519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:12.238722   58902 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:12.268746   58902 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:12.268776   58902 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:12.268790   58902 node_conditions.go:105] duration metric: took 30.063656ms to run NodePressure ...
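The capacity figures logged by the NodePressure check (ephemeral storage and CPU) come straight from the Node object and can be pulled directly; context and node name are taken from the log, the jsonpath query is illustrative:

    kubectl --context embed-certs-953044 get node embed-certs-953044 \
      -o jsonpath='{.status.capacity}'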
	I1202 12:54:12.268802   58902 start.go:241] waiting for startup goroutines ...
	I1202 12:54:12.268813   58902 start.go:246] waiting for cluster config update ...
	I1202 12:54:12.268828   58902 start.go:255] writing updated cluster config ...
	I1202 12:54:12.269149   58902 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:12.315523   58902 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:12.317559   58902 out.go:177] * Done! kubectl is now configured to use "embed-certs-953044" cluster and "default" namespace by default
	I1202 12:54:11.252465   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:13.251203   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:15.251601   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:17.332421   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:17.751347   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.252108   57877 pod_ready.go:103] pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace has status "Ready":"False"
	I1202 12:54:20.404508   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:21.252458   57877 pod_ready.go:82] duration metric: took 4m0.007570673s for pod "metrics-server-6867b74b74-sn7tq" in "kube-system" namespace to be "Ready" ...
	E1202 12:54:21.252479   57877 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1202 12:54:21.252487   57877 pod_ready.go:39] duration metric: took 4m2.808635222s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:54:21.252501   57877 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:54:21.252524   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:21.252565   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:21.311644   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:21.311663   57877 cri.go:89] found id: ""
	I1202 12:54:21.311670   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:21.311712   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.316826   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:21.316881   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:21.366930   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:21.366951   57877 cri.go:89] found id: ""
	I1202 12:54:21.366959   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:21.366999   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.371132   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:21.371194   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:21.405238   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.405261   57877 cri.go:89] found id: ""
	I1202 12:54:21.405270   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:21.405312   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.409631   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:21.409687   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:21.444516   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.444535   57877 cri.go:89] found id: ""
	I1202 12:54:21.444542   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:21.444583   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.448736   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:21.448796   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:21.485458   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:21.485484   57877 cri.go:89] found id: ""
	I1202 12:54:21.485494   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:21.485546   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.489882   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:21.489953   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:21.525951   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.525971   57877 cri.go:89] found id: ""
	I1202 12:54:21.525978   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:21.526028   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.530141   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:21.530186   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:21.564886   57877 cri.go:89] found id: ""
	I1202 12:54:21.564909   57877 logs.go:282] 0 containers: []
	W1202 12:54:21.564920   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:21.564928   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:21.564981   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:21.601560   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.601585   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:21.601593   57877 cri.go:89] found id: ""
	I1202 12:54:21.601603   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:21.601660   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.605710   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:21.609870   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:21.609892   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:21.645558   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:21.645581   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:21.680733   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:21.680764   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:21.731429   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:21.731452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:21.764658   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:21.764680   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:22.249475   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:22.249511   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:22.305127   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:22.305162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:22.369496   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:22.369528   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:22.384486   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:22.384510   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:22.425402   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:22.425424   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:22.463801   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:22.463828   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:22.507022   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:22.507048   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:22.638422   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:22.638452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
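Every "Gathering logs for ..." block above follows the same pattern already visible in the commands: resolve the container ID for a component via crictl, then tail its logs. Condensed into a loop (illustrative; component names and flags taken from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
        id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)
        [ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"
    done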
	I1202 12:54:25.190880   57877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:54:25.206797   57877 api_server.go:72] duration metric: took 4m14.027370187s to wait for apiserver process to appear ...
	I1202 12:54:25.206823   57877 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:54:25.206866   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:25.206924   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:25.241643   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:25.241669   57877 cri.go:89] found id: ""
	I1202 12:54:25.241680   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:25.241734   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.245997   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:25.246037   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:25.290955   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:25.290973   57877 cri.go:89] found id: ""
	I1202 12:54:25.290980   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:25.291029   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.295284   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:25.295329   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:25.333254   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:25.333275   57877 cri.go:89] found id: ""
	I1202 12:54:25.333284   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:25.333328   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.337649   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:25.337698   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:25.371662   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.371682   57877 cri.go:89] found id: ""
	I1202 12:54:25.371691   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:25.371739   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.376026   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:25.376075   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:25.411223   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:25.411238   57877 cri.go:89] found id: ""
	I1202 12:54:25.411245   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:25.411287   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.415307   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:25.415351   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:25.451008   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:25.451027   57877 cri.go:89] found id: ""
	I1202 12:54:25.451035   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:25.451089   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.455681   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:25.455727   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:25.499293   57877 cri.go:89] found id: ""
	I1202 12:54:25.499315   57877 logs.go:282] 0 containers: []
	W1202 12:54:25.499325   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:25.499332   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:25.499377   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:25.533874   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:25.533896   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:25.533903   57877 cri.go:89] found id: ""
	I1202 12:54:25.533912   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:25.533961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.537993   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:25.541881   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:25.541899   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:25.645488   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:25.645512   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:25.683783   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:25.683807   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:26.120334   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:26.120367   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:26.484425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:26.190493   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:26.190521   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:26.235397   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:26.235421   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:26.285411   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:26.285452   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:26.331807   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:26.331836   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:26.374437   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:26.374461   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:26.436459   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:26.436487   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:26.472126   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:26.472162   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:26.504819   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:26.504840   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:26.518789   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:26.518821   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.069521   57877 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 12:54:29.074072   57877 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 12:54:29.075022   57877 api_server.go:141] control plane version: v1.31.2
	I1202 12:54:29.075041   57877 api_server.go:131] duration metric: took 3.868210222s to wait for apiserver health ...
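The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint. A hedged one-liner that reproduces it from the host, assuming the default setup in which anonymous access to /healthz is allowed; -k only skips certificate verification, otherwise the CA and client certificates under ~/.minikube would be passed instead.

    curl -k https://192.168.61.205:8443/healthz   # expected response body: "ok", matching the log above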
	I1202 12:54:29.075048   57877 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:54:29.075069   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:54:29.075112   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:54:29.110715   57877 cri.go:89] found id: "d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:29.110735   57877 cri.go:89] found id: ""
	I1202 12:54:29.110742   57877 logs.go:282] 1 containers: [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7]
	I1202 12:54:29.110790   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.114994   57877 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:54:29.115040   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:54:29.150431   57877 cri.go:89] found id: "460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.150459   57877 cri.go:89] found id: ""
	I1202 12:54:29.150468   57877 logs.go:282] 1 containers: [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac]
	I1202 12:54:29.150525   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.154909   57877 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:54:29.154967   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:54:29.198139   57877 cri.go:89] found id: "7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.198162   57877 cri.go:89] found id: ""
	I1202 12:54:29.198172   57877 logs.go:282] 1 containers: [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35]
	I1202 12:54:29.198224   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.202969   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:54:29.203031   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:54:29.243771   57877 cri.go:89] found id: "0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.243795   57877 cri.go:89] found id: ""
	I1202 12:54:29.243802   57877 logs.go:282] 1 containers: [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4]
	I1202 12:54:29.243843   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.248039   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:54:29.248106   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:54:29.286473   57877 cri.go:89] found id: "15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.286492   57877 cri.go:89] found id: ""
	I1202 12:54:29.286498   57877 logs.go:282] 1 containers: [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85]
	I1202 12:54:29.286538   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.290543   57877 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:54:29.290590   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:54:29.327899   57877 cri.go:89] found id: "316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.327916   57877 cri.go:89] found id: ""
	I1202 12:54:29.327922   57877 logs.go:282] 1 containers: [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14]
	I1202 12:54:29.327961   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.332516   57877 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:54:29.332571   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:54:29.368204   57877 cri.go:89] found id: ""
	I1202 12:54:29.368236   57877 logs.go:282] 0 containers: []
	W1202 12:54:29.368247   57877 logs.go:284] No container was found matching "kindnet"
	I1202 12:54:29.368255   57877 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 12:54:29.368301   57877 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 12:54:29.407333   57877 cri.go:89] found id: "b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.407358   57877 cri.go:89] found id: "ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.407364   57877 cri.go:89] found id: ""
	I1202 12:54:29.407372   57877 logs.go:282] 2 containers: [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c]
	I1202 12:54:29.407425   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.412153   57877 ssh_runner.go:195] Run: which crictl
	I1202 12:54:29.416525   57877 logs.go:123] Gathering logs for kube-scheduler [0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4] ...
	I1202 12:54:29.416548   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c490584031d2849bcc6f1b170371a059f37eee8d2847e5e0b9a3298c9ef2ba4"
	I1202 12:54:29.457360   57877 logs.go:123] Gathering logs for kube-proxy [15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85] ...
	I1202 12:54:29.457394   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15d09a46ff0416b037c807607c25ca443ee6d3d69c1400a272a636e704b94b85"
	I1202 12:54:29.495662   57877 logs.go:123] Gathering logs for kube-controller-manager [316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14] ...
	I1202 12:54:29.495691   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 316b371ddf0b0ebe5700ba7ab28b3fa1fcbc6bffe493ab9dbfe6012b79636e14"
	I1202 12:54:29.549304   57877 logs.go:123] Gathering logs for storage-provisioner [b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f] ...
	I1202 12:54:29.549331   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b120e768c4ec7ef4d13e0affc7ea0ff1797c82c917341fc27a7cdeff9d8fe03f"
	I1202 12:54:29.585693   57877 logs.go:123] Gathering logs for storage-provisioner [ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c] ...
	I1202 12:54:29.585718   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff4595631eef7311552881c4613899997225e36bfdd20bcc3a6a39ab5b25c20c"
	I1202 12:54:29.621888   57877 logs.go:123] Gathering logs for container status ...
	I1202 12:54:29.621912   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 12:54:29.670118   57877 logs.go:123] Gathering logs for dmesg ...
	I1202 12:54:29.670153   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:54:29.685833   57877 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:54:29.685855   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 12:54:29.792525   57877 logs.go:123] Gathering logs for etcd [460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac] ...
	I1202 12:54:29.792555   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 460259371c9778ae9a03ce9a57500039cd9f91649a684361219c0cc8683942ac"
	I1202 12:54:29.837090   57877 logs.go:123] Gathering logs for coredns [7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35] ...
	I1202 12:54:29.837138   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7db2e67ce7bdddb3162db8042b386ef259f09f1455d602b0e6a67bc031641b35"
	I1202 12:54:29.872862   57877 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:54:29.872893   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:54:30.228483   57877 logs.go:123] Gathering logs for kubelet ...
	I1202 12:54:30.228523   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:54:30.298252   57877 logs.go:123] Gathering logs for kube-apiserver [d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7] ...
	I1202 12:54:30.298285   57877 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8d62b779a876fcd17dd88babe4769419ec797703eb9b8fb62232d2e370735d7"
	I1202 12:54:32.851536   57877 system_pods.go:59] 8 kube-system pods found
	I1202 12:54:32.851562   57877 system_pods.go:61] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.851567   57877 system_pods.go:61] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.851571   57877 system_pods.go:61] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.851574   57877 system_pods.go:61] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.851577   57877 system_pods.go:61] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.851580   57877 system_pods.go:61] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.851586   57877 system_pods.go:61] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.851590   57877 system_pods.go:61] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.851597   57877 system_pods.go:74] duration metric: took 3.776542886s to wait for pod list to return data ...
	I1202 12:54:32.851604   57877 default_sa.go:34] waiting for default service account to be created ...
	I1202 12:54:32.853911   57877 default_sa.go:45] found service account: "default"
	I1202 12:54:32.853928   57877 default_sa.go:55] duration metric: took 2.318516ms for default service account to be created ...
	I1202 12:54:32.853935   57877 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 12:54:32.858485   57877 system_pods.go:86] 8 kube-system pods found
	I1202 12:54:32.858508   57877 system_pods.go:89] "coredns-7c65d6cfc9-cvfc9" [f88088d1-7d48-498a-8251-f3a9ff436583] Running
	I1202 12:54:32.858513   57877 system_pods.go:89] "etcd-no-preload-658679" [950bf61a-2e04-43f9-805b-9cb98708d604] Running
	I1202 12:54:32.858519   57877 system_pods.go:89] "kube-apiserver-no-preload-658679" [3d4bfb53-da01-4973-bb3e-263a6d68463c] Running
	I1202 12:54:32.858523   57877 system_pods.go:89] "kube-controller-manager-no-preload-658679" [17db9644-06c4-4471-9186-e4de3efc7575] Running
	I1202 12:54:32.858526   57877 system_pods.go:89] "kube-proxy-2xf6j" [477778b7-12f0-4055-a583-edbf84c1a635] Running
	I1202 12:54:32.858530   57877 system_pods.go:89] "kube-scheduler-no-preload-658679" [617f769c-0ae5-4942-80c5-f85fafbc389e] Running
	I1202 12:54:32.858536   57877 system_pods.go:89] "metrics-server-6867b74b74-sn7tq" [8171d626-7036-4585-a967-8ff54f00cfc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:54:32.858540   57877 system_pods.go:89] "storage-provisioner" [5736f43f-6d15-41de-856a-048887f08742] Running
	I1202 12:54:32.858547   57877 system_pods.go:126] duration metric: took 4.607096ms to wait for k8s-apps to be running ...
	I1202 12:54:32.858555   57877 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 12:54:32.858592   57877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:54:32.874267   57877 system_svc.go:56] duration metric: took 15.704013ms WaitForService to wait for kubelet
	I1202 12:54:32.874293   57877 kubeadm.go:582] duration metric: took 4m21.694870267s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 12:54:32.874311   57877 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:54:32.877737   57877 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:54:32.877757   57877 node_conditions.go:123] node cpu capacity is 2
	I1202 12:54:32.877768   57877 node_conditions.go:105] duration metric: took 3.452076ms to run NodePressure ...
	I1202 12:54:32.877782   57877 start.go:241] waiting for startup goroutines ...
	I1202 12:54:32.877791   57877 start.go:246] waiting for cluster config update ...
	I1202 12:54:32.877807   57877 start.go:255] writing updated cluster config ...
	I1202 12:54:32.878129   57877 ssh_runner.go:195] Run: rm -f paused
	I1202 12:54:32.926190   57877 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 12:54:32.927894   57877 out.go:177] * Done! kubectl is now configured to use "no-preload-658679" cluster and "default" namespace by default
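With the "Done!" line the no-preload-658679 cluster is up and its kubeconfig context is active. A short usage sketch for verifying it from the host, assuming the kubectl context name matches the profile name as minikube sets it by default:

    kubectl --context no-preload-658679 get nodes
    kubectl --context no-preload-658679 get pods -n kube-system   # should list the eight kube-system pods enumerated above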
	I1202 12:54:29.556420   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:35.636450   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:38.708454   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:44.788462   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:47.860484   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:53.940448   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:54:57.012536   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:03.092433   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:06.164483   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:12.244464   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:15.316647   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:21.396479   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:24.468584   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
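The repeated "no route to host" dials above mean SSH to 192.168.39.154:22 is unreachable; as the later lines in this log show, the default-k8s-diff-port-653783 VM was in fact stopped and had to be restarted. A hedged sketch of two quick checks on the KVM host that separate "VM is down" from "network problem" (domain name and IP taken from this log):

    virsh -c qemu:///system domstate default-k8s-diff-port-653783   # reports "running" or "shut off"
    nc -vz -w 2 192.168.39.154 22                                   # is anything listening on the SSH port?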
	I1202 12:55:32.968600   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:55:32.968731   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:55:32.970229   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:32.970291   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:32.970394   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:32.970513   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:32.970629   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:32.970717   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:32.972396   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:32.972491   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:32.972577   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:32.972734   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:32.972823   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:32.972926   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:32.973006   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:32.973108   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:32.973192   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:32.973318   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:32.973429   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:32.973501   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:32.973594   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:32.973658   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:32.973722   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:32.973819   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:32.973903   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:32.974041   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:32.974157   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:32.974206   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:32.974301   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:32.976508   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:32.976620   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:32.976741   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:32.976842   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:32.976957   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:32.977191   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:32.977281   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:55:32.977342   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977505   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977579   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.977795   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.977906   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978091   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978174   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978394   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978497   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:55:32.978743   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:55:32.978756   59162 kubeadm.go:310] 
	I1202 12:55:32.978801   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:55:32.978859   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:55:32.978868   59162 kubeadm.go:310] 
	I1202 12:55:32.978914   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:55:32.978961   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:55:32.979078   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:55:32.979088   59162 kubeadm.go:310] 
	I1202 12:55:32.979230   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:55:32.979279   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:55:32.979337   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:55:32.979346   59162 kubeadm.go:310] 
	I1202 12:55:32.979484   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:55:32.979580   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:55:32.979593   59162 kubeadm.go:310] 
	I1202 12:55:32.979721   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:55:32.979848   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:55:32.979968   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:55:32.980059   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:55:32.980127   59162 kubeadm.go:310] 
	W1202 12:55:32.980202   59162 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 12:55:32.980267   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
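The failure output above already names the recommended checks; a minimal sketch of running them directly on the node before the retry, using the CRI-O socket path the log itself reports:

    sudo systemctl status kubelet                                                                   # is the kubelet active at all?
    sudo journalctl -xeu kubelet | tail -n 100                                                      # recent kubelet errors, if any
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause        # surviving control-plane containers
    curl -sSL http://localhost:10248/healthz                                                        # the same kubelet health probe kubeadm polls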
	I1202 12:55:33.452325   59162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:55:33.467527   59162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:55:33.477494   59162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:55:33.477522   59162 kubeadm.go:157] found existing configuration files:
	
	I1202 12:55:33.477575   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 12:55:33.487333   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:55:33.487395   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:55:33.497063   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 12:55:33.506552   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:55:33.506605   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:55:33.515968   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.524922   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:55:33.524956   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:55:33.534339   59162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 12:55:33.543370   59162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:55:33.543403   59162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:55:33.552970   59162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 12:55:33.624833   59162 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1202 12:55:33.624990   59162 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 12:55:33.767688   59162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 12:55:33.767796   59162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 12:55:33.767909   59162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1202 12:55:33.935314   59162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 12:55:30.548478   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.624512   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:33.937193   59162 out.go:235]   - Generating certificates and keys ...
	I1202 12:55:33.937290   59162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 12:55:33.937402   59162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 12:55:33.937513   59162 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 12:55:33.937620   59162 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 12:55:33.937722   59162 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 12:55:33.937791   59162 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 12:55:33.937845   59162 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 12:55:33.937896   59162 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 12:55:33.937964   59162 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 12:55:33.938028   59162 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 12:55:33.938061   59162 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 12:55:33.938108   59162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 12:55:34.167163   59162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 12:55:35.008947   59162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 12:55:35.304057   59162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 12:55:35.385824   59162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 12:55:35.409687   59162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 12:55:35.413131   59162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 12:55:35.413218   59162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 12:55:35.569508   59162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 12:55:35.571455   59162 out.go:235]   - Booting up control plane ...
	I1202 12:55:35.571596   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 12:55:35.578476   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 12:55:35.579686   59162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 12:55:35.580586   59162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 12:55:35.582869   59162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1202 12:55:39.700423   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:42.772498   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:48.852452   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:51.924490   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:55:58.004488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:01.076456   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:07.160425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:10.228467   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:15.585409   59162 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1202 12:56:15.585530   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:15.585792   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:16.308453   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:20.586011   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:20.586257   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:19.380488   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:25.460451   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:28.532425   61173 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.154:22: connect: no route to host
	I1202 12:56:30.586783   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:30.587053   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:31.533399   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:31.533454   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533725   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:31.533749   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:31.533914   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:31.535344   61173 machine.go:96] duration metric: took 4m37.429393672s to provisionDockerMachine
	I1202 12:56:31.535386   61173 fix.go:56] duration metric: took 4m37.448634942s for fixHost
	I1202 12:56:31.535394   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 4m37.448659715s
	W1202 12:56:31.535408   61173 start.go:714] error starting host: provision: host is not running
	W1202 12:56:31.535498   61173 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1202 12:56:31.535507   61173 start.go:729] Will try again in 5 seconds ...
	I1202 12:56:36.536323   61173 start.go:360] acquireMachinesLock for default-k8s-diff-port-653783: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 12:56:36.536434   61173 start.go:364] duration metric: took 71.395µs to acquireMachinesLock for "default-k8s-diff-port-653783"
	I1202 12:56:36.536463   61173 start.go:96] Skipping create...Using existing machine configuration
	I1202 12:56:36.536471   61173 fix.go:54] fixHost starting: 
	I1202 12:56:36.536763   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:56:36.536790   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:56:36.551482   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1202 12:56:36.551962   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:56:36.552383   61173 main.go:141] libmachine: Using API Version  1
	I1202 12:56:36.552405   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:56:36.552689   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:56:36.552849   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:36.552968   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 12:56:36.554481   61173 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653783: state=Stopped err=<nil>
	I1202 12:56:36.554501   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	W1202 12:56:36.554652   61173 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 12:56:36.556508   61173 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653783" ...
	I1202 12:56:36.557534   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Start
	I1202 12:56:36.557690   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring networks are active...
	I1202 12:56:36.558371   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network default is active
	I1202 12:56:36.558713   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Ensuring network mk-default-k8s-diff-port-653783 is active
	I1202 12:56:36.559023   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Getting domain xml...
	I1202 12:56:36.559739   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Creating domain...
	I1202 12:56:37.799440   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting to get IP...
	I1202 12:56:37.800397   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.800918   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.800836   62278 retry.go:31] will retry after 192.811495ms: waiting for machine to come up
	I1202 12:56:37.995285   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995743   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:37.995771   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:37.995697   62278 retry.go:31] will retry after 367.440749ms: waiting for machine to come up
	I1202 12:56:38.365229   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365781   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.365810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.365731   62278 retry.go:31] will retry after 350.196014ms: waiting for machine to come up
	I1202 12:56:38.717121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717650   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:38.717681   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:38.717590   62278 retry.go:31] will retry after 557.454725ms: waiting for machine to come up
	I1202 12:56:39.276110   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:39.276631   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:39.276536   62278 retry.go:31] will retry after 735.275509ms: waiting for machine to come up
	I1202 12:56:40.013307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.013888   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.013833   62278 retry.go:31] will retry after 613.45623ms: waiting for machine to come up
	I1202 12:56:40.629220   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629731   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:40.629776   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:40.629678   62278 retry.go:31] will retry after 748.849722ms: waiting for machine to come up
	I1202 12:56:41.380615   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381052   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:41.381075   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:41.381023   62278 retry.go:31] will retry after 1.342160202s: waiting for machine to come up
	I1202 12:56:42.724822   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725315   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:42.725355   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:42.725251   62278 retry.go:31] will retry after 1.693072543s: waiting for machine to come up
	I1202 12:56:44.420249   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420700   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:44.420721   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:44.420658   62278 retry.go:31] will retry after 2.210991529s: waiting for machine to come up
	I1202 12:56:46.633486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633847   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:46.633875   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:46.633807   62278 retry.go:31] will retry after 2.622646998s: waiting for machine to come up
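
The libmachine lines above poll the hypervisor for the guest's DHCP lease with steadily growing, slightly randomized delays (557ms, 735ms, ... 3.1s). Below is a minimal Go sketch of that wait-with-backoff pattern under stated assumptions; lookupIP, the timing bounds, and the jitter factor are placeholders, not minikube's actual retry.go logic.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the hypervisor's DHCP leases; it
// returns an error until the guest has obtained an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, roughly
// mirroring the "will retry after ..." lines in the log above.
func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 500 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Add up to 50% jitter and cap the base delay so retries stay responsive.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("machine did not come up within %v", deadline)
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}
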
	I1202 12:56:50.587516   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:56:50.587731   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:56:49.257705   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258232   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:49.258260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:49.258186   62278 retry.go:31] will retry after 2.375973874s: waiting for machine to come up
	I1202 12:56:51.636055   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636422   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | unable to find current IP address of domain default-k8s-diff-port-653783 in network mk-default-k8s-diff-port-653783
	I1202 12:56:51.636450   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | I1202 12:56:51.636379   62278 retry.go:31] will retry after 3.118442508s: waiting for machine to come up
	I1202 12:56:54.757260   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757665   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Found IP for machine: 192.168.39.154
	I1202 12:56:54.757689   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has current primary IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.757697   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserving static IP address...
	I1202 12:56:54.758088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.758108   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Reserved static IP address: 192.168.39.154
	I1202 12:56:54.758120   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | skip adding static IP to network mk-default-k8s-diff-port-653783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653783", mac: "52:54:00:00:f6:f0", ip: "192.168.39.154"}
	I1202 12:56:54.758134   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Getting to WaitForSSH function...
	I1202 12:56:54.758142   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Waiting for SSH to be available...
	I1202 12:56:54.760333   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760643   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.760672   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.760789   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH client type: external
	I1202 12:56:54.760812   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa (-rw-------)
	I1202 12:56:54.760855   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.154 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 12:56:54.760880   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | About to run SSH command:
	I1202 12:56:54.760892   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | exit 0
	I1202 12:56:54.884099   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | SSH cmd err, output: <nil>: 
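
WaitForSSH above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command returns success. The following is a minimal sketch of that readiness probe, assuming the external-ssh approach shown in the log; the address, key path, and retry budget are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest through the system ssh client. A zero
// exit status means sshd is accepting our key, so provisioning can proceed.
// addr and keyPath are illustrative placeholders.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.154", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
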
	I1202 12:56:54.884435   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetConfigRaw
	I1202 12:56:54.885058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:54.887519   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.887823   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.887854   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.888041   61173 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json ...
	I1202 12:56:54.888333   61173 machine.go:93] provisionDockerMachine start ...
	I1202 12:56:54.888352   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:54.888564   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:54.890754   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891062   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:54.891090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:54.891254   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:54.891423   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891560   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:54.891709   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:54.891851   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:54.892053   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:54.892070   61173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 12:56:54.996722   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 12:56:54.996751   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.996974   61173 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653783"
	I1202 12:56:54.997004   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:54.997202   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.000026   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000425   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.000453   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.000624   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.000810   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.000978   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.001122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.001308   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.001540   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.001562   61173 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653783 && echo "default-k8s-diff-port-653783" | sudo tee /etc/hostname
	I1202 12:56:55.122933   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653783
	
	I1202 12:56:55.122965   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.125788   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126182   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.126219   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.126406   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.126555   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126718   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.126834   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.126973   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.127180   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.127206   61173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 12:56:55.242263   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 12:56:55.242291   61173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 12:56:55.242331   61173 buildroot.go:174] setting up certificates
	I1202 12:56:55.242340   61173 provision.go:84] configureAuth start
	I1202 12:56:55.242350   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetMachineName
	I1202 12:56:55.242604   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:55.245340   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245685   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.245719   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.245882   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.248090   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248481   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.248512   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.248659   61173 provision.go:143] copyHostCerts
	I1202 12:56:55.248718   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 12:56:55.248733   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 12:56:55.248810   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 12:56:55.248920   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 12:56:55.248931   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 12:56:55.248965   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 12:56:55.249039   61173 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 12:56:55.249049   61173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 12:56:55.249081   61173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 12:56:55.249152   61173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653783 san=[127.0.0.1 192.168.39.154 default-k8s-diff-port-653783 localhost minikube]
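
The server certificate generated above carries a SAN list of IPs and hostnames (127.0.0.1, 192.168.39.154, the machine name, localhost, minikube). Below is a rough sketch of issuing a certificate with such SANs using Go's crypto/x509; it is self-signed for brevity, whereas the log's cert is signed by the minikube CA, and the expiry simply mirrors the CertExpiration value from the config dump.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// selfSignedServerCert issues a server certificate carrying the same kind of
// SAN list the provision.go line above reports (IPs plus hostnames). In the
// real flow the cert is signed by a CA key, not self-signed.
func selfSignedServerCert(dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"example"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // assumed to match CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := selfSignedServerCert(
		[]string{"default-k8s-diff-port-653783", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.154")},
	)
	if err != nil {
		fmt.Println("cert generation failed:", err)
		return
	}
	os.Stdout.Write(pemBytes)
}
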
	I1202 12:56:55.688887   61173 provision.go:177] copyRemoteCerts
	I1202 12:56:55.688948   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 12:56:55.688976   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.691486   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.691865   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.691896   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.692056   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.692239   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.692403   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.692524   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:55.777670   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 12:56:55.802466   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1202 12:56:55.826639   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 12:56:55.850536   61173 provision.go:87] duration metric: took 608.183552ms to configureAuth
	I1202 12:56:55.850560   61173 buildroot.go:189] setting minikube options for container-runtime
	I1202 12:56:55.850731   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:56:55.850813   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:55.853607   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.853991   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:55.854024   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:55.854122   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:55.854294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854436   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:55.854598   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:55.854734   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:55.854883   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:55.854899   61173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 12:56:56.083902   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 12:56:56.083931   61173 machine.go:96] duration metric: took 1.195584241s to provisionDockerMachine
	I1202 12:56:56.083944   61173 start.go:293] postStartSetup for "default-k8s-diff-port-653783" (driver="kvm2")
	I1202 12:56:56.083957   61173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 12:56:56.083974   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.084276   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 12:56:56.084307   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.087400   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087727   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.087750   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.087909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.088088   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.088272   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.088448   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.170612   61173 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 12:56:56.175344   61173 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 12:56:56.175366   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 12:56:56.175454   61173 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 12:56:56.175529   61173 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 12:56:56.175610   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 12:56:56.185033   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:56:56.209569   61173 start.go:296] duration metric: took 125.611321ms for postStartSetup
	I1202 12:56:56.209605   61173 fix.go:56] duration metric: took 19.673134089s for fixHost
	I1202 12:56:56.209623   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.212600   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.212883   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.212923   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.213137   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.213395   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213575   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.213708   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.213854   61173 main.go:141] libmachine: Using SSH client type: native
	I1202 12:56:56.214014   61173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.154 22 <nil> <nil>}
	I1202 12:56:56.214032   61173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 12:56:56.320723   61173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733144216.287359296
	
	I1202 12:56:56.320744   61173 fix.go:216] guest clock: 1733144216.287359296
	I1202 12:56:56.320753   61173 fix.go:229] Guest: 2024-12-02 12:56:56.287359296 +0000 UTC Remote: 2024-12-02 12:56:56.209609687 +0000 UTC m=+302.261021771 (delta=77.749609ms)
	I1202 12:56:56.320776   61173 fix.go:200] guest clock delta is within tolerance: 77.749609ms
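
The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the result when the delta is within tolerance. Here is a small sketch of that comparison, assuming a one-second tolerance (the actual threshold is not shown in the log).

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1733144216.287359296")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733144216.287359296")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed; the log only reports the measured delta
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
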
	I1202 12:56:56.320781   61173 start.go:83] releasing machines lock for "default-k8s-diff-port-653783", held for 19.784333398s
	I1202 12:56:56.320797   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.321011   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:56.323778   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324117   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.324136   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.324289   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324759   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324921   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 12:56:56.324984   61173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 12:56:56.325034   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.325138   61173 ssh_runner.go:195] Run: cat /version.json
	I1202 12:56:56.325164   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 12:56:56.327744   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328000   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328058   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328083   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328262   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328373   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:56.328411   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 12:56:56.328584   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.328774   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 12:56:56.328769   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.328908   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 12:56:56.329007   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 12:56:56.405370   61173 ssh_runner.go:195] Run: systemctl --version
	I1202 12:56:56.427743   61173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 12:56:56.574416   61173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 12:56:56.580858   61173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 12:56:56.580948   61173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 12:56:56.597406   61173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 12:56:56.597427   61173 start.go:495] detecting cgroup driver to use...
	I1202 12:56:56.597472   61173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 12:56:56.612456   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 12:56:56.625811   61173 docker.go:217] disabling cri-docker service (if available) ...
	I1202 12:56:56.625847   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 12:56:56.642677   61173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 12:56:56.657471   61173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 12:56:56.776273   61173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 12:56:56.949746   61173 docker.go:233] disabling docker service ...
	I1202 12:56:56.949807   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 12:56:56.964275   61173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 12:56:56.977461   61173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 12:56:57.091134   61173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 12:56:57.209421   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 12:56:57.223153   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 12:56:57.241869   61173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 12:56:57.241933   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.252117   61173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 12:56:57.252174   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.262799   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.275039   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.285987   61173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 12:56:57.296968   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.307242   61173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 12:56:57.324555   61173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
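
The sed commands above rewrite assignments in the CRI-O drop-in (pause_image, cgroup_manager, and related keys). Below is a sketch of the same key-replacement idea in Go, run against a scratch copy of the file; the path and key list here are illustrative, not the commands minikube actually issues.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfigValue mimics the sed edits above: any existing assignment of key
// in the config file is replaced with the desired quoted value.
func setConfigValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, updated, 0644)
}

func main() {
	// Illustrative calls matching two of the sed lines in the log,
	// pointed at a local scratch file rather than the real drop-in.
	for _, kv := range [][2]string{
		{"pause_image", "registry.k8s.io/pause:3.10"},
		{"cgroup_manager", "cgroupfs"},
	} {
		if err := setConfigValue("02-crio.conf", kv[0], kv[1]); err != nil {
			fmt.Println("config edit failed:", err)
		}
	}
}
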
	I1202 12:56:57.335395   61173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 12:56:57.344411   61173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 12:56:57.344450   61173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 12:56:57.357400   61173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
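
The three commands above are the usual container-networking prerequisites: probe the bridge netfilter sysctl, load br_netfilter if the probe fails, then enable IPv4 forwarding. Here is a local sketch of that decision flow, assuming root on a Linux host; the log runs the same commands inside the guest over SSH.

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter mirrors the sequence in the log: if the bridge sysctl
// cannot be read, br_netfilter is probably not loaded, so modprobe it,
// then make sure IPv4 forwarding is on.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter")
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		return fmt.Errorf("enable ip_forward: %w", err)
	}
	return nil
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
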
	I1202 12:56:57.366269   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:56:57.486764   61173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 12:56:57.574406   61173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 12:56:57.574464   61173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 12:56:57.579268   61173 start.go:563] Will wait 60s for crictl version
	I1202 12:56:57.579328   61173 ssh_runner.go:195] Run: which crictl
	I1202 12:56:57.583110   61173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 12:56:57.621921   61173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 12:56:57.622003   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.650543   61173 ssh_runner.go:195] Run: crio --version
	I1202 12:56:57.683842   61173 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 12:56:57.684861   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetIP
	I1202 12:56:57.687188   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687459   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 12:56:57.687505   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 12:56:57.687636   61173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 12:56:57.691723   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 12:56:57.704869   61173 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 12:56:57.704999   61173 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 12:56:57.705054   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:56:57.738780   61173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 12:56:57.738828   61173 ssh_runner.go:195] Run: which lz4
	I1202 12:56:57.743509   61173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 12:56:57.747763   61173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 12:56:57.747784   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1202 12:56:59.105988   61173 crio.go:462] duration metric: took 1.362506994s to copy over tarball
	I1202 12:56:59.106062   61173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 12:57:01.191007   61173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084920502s)
	I1202 12:57:01.191031   61173 crio.go:469] duration metric: took 2.085014298s to extract the tarball
	I1202 12:57:01.191038   61173 ssh_runner.go:146] rm: /preloaded.tar.lz4
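
The preload path above checks whether the image tarball already exists on the guest, copies it over if not, extracts it into /var with lz4 while preserving security xattrs, and finally removes the archive. A condensed sketch of that flow follows, with the copy step stubbed out (the real transfer happens over SSH).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const tarball = "/preloaded.tar.lz4" // same target path as in the log

// copyTarball stands in for the scp step in the log; how the bytes get onto
// the machine is deliberately not shown here.
func copyTarball() error { return fmt.Errorf("copy step stubbed out in this sketch") }

func preloadImages() error {
	// Only copy if the tarball is not already present.
	if _, err := os.Stat(tarball); err != nil {
		if err := copyTarball(); err != nil {
			return err
		}
	}
	// Extract with lz4 while preserving security xattrs, as the log shows.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	// Free the space once the images are unpacked.
	return os.Remove(tarball)
}

func main() {
	if err := preloadImages(); err != nil {
		fmt.Println("preload:", err)
	}
}
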
	I1202 12:57:01.229238   61173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 12:57:01.272133   61173 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 12:57:01.272156   61173 cache_images.go:84] Images are preloaded, skipping loading
	I1202 12:57:01.272164   61173 kubeadm.go:934] updating node { 192.168.39.154 8444 v1.31.2 crio true true} ...
	I1202 12:57:01.272272   61173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.154
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 12:57:01.272330   61173 ssh_runner.go:195] Run: crio config
	I1202 12:57:01.318930   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:01.318957   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:01.318968   61173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 12:57:01.318994   61173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.154 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653783 NodeName:default-k8s-diff-port-653783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.154"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.154 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 12:57:01.319125   61173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.154
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653783"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.154"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.154"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 12:57:01.319184   61173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 12:57:01.330162   61173 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 12:57:01.330226   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 12:57:01.340217   61173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 12:57:01.356786   61173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 12:57:01.373210   61173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1202 12:57:01.390184   61173 ssh_runner.go:195] Run: grep 192.168.39.154	control-plane.minikube.internal$ /etc/hosts
	I1202 12:57:01.394099   61173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.154	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
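
The /etc/hosts updates here and at host.minikube.internal earlier use a grep -v / echo / cp pipeline to drop any stale entry and append the current one. Below is a sketch of the same idempotent rewrite in Go, pointed at a scratch file instead of /etc/hosts; the file name and mapping are illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-style file so it contains exactly one
// line mapping name to ip, mirroring the shell pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop blank lines and any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.39.154", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
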
	I1202 12:57:01.406339   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 12:57:01.526518   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 12:57:01.543879   61173 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783 for IP: 192.168.39.154
	I1202 12:57:01.543899   61173 certs.go:194] generating shared ca certs ...
	I1202 12:57:01.543920   61173 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 12:57:01.544070   61173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 12:57:01.544134   61173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 12:57:01.544147   61173 certs.go:256] generating profile certs ...
	I1202 12:57:01.544285   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.key
	I1202 12:57:01.544377   61173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key.44fa7240
	I1202 12:57:01.544429   61173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key
	I1202 12:57:01.544579   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 12:57:01.544608   61173 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 12:57:01.544617   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 12:57:01.544636   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 12:57:01.544659   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 12:57:01.544688   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 12:57:01.544727   61173 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 12:57:01.545381   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 12:57:01.580933   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 12:57:01.621199   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 12:57:01.648996   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 12:57:01.681428   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1202 12:57:01.710907   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 12:57:01.741414   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 12:57:01.766158   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 12:57:01.789460   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 12:57:01.812569   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 12:57:01.836007   61173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 12:57:01.858137   61173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 12:57:01.874315   61173 ssh_runner.go:195] Run: openssl version
	I1202 12:57:01.880190   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 12:57:01.893051   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898250   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.898306   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 12:57:01.904207   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 12:57:01.915975   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 12:57:01.927977   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932436   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.932478   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 12:57:01.938049   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 12:57:01.948744   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 12:57:01.959472   61173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963806   61173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.963839   61173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 12:57:01.969412   61173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 12:57:01.980743   61173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 12:57:01.986211   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 12:57:01.992717   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 12:57:01.998781   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 12:57:02.004934   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 12:57:02.010903   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 12:57:02.016677   61173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
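The -checkend 86400 runs above verify that each existing control-plane certificate stays valid for at least 24 hours before the restart proceeds. A minimal shell sketch of the same check, assuming the certificate paths shown in this run:

    # exit status 0 means the cert is still valid 24h from now; non-zero means it will expire
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 || echo "expires within 24h: $crt"
    done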
	I1202 12:57:02.022595   61173 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-653783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
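The same profile configuration is persisted on the host under the .minikube profiles directory shown earlier in this log; a quick way to inspect it outside the logs (the config.json filename is an assumption based on minikube's usual layout, not something printed above):

    # pretty-print the persisted profile config (config.json name assumed)
    python3 -m json.tool \
      /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/config.json
    # or via the CLI
    minikube profile list -o json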
	I1202 12:57:02.022680   61173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 12:57:02.022711   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.060425   61173 cri.go:89] found id: ""
	I1202 12:57:02.060497   61173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 12:57:02.070807   61173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 12:57:02.070827   61173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 12:57:02.070868   61173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 12:57:02.081036   61173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:57:02.082088   61173 kubeconfig.go:125] found "default-k8s-diff-port-653783" server: "https://192.168.39.154:8444"
	I1202 12:57:02.084179   61173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 12:57:02.094381   61173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.154
	I1202 12:57:02.094429   61173 kubeadm.go:1160] stopping kube-system containers ...
	I1202 12:57:02.094441   61173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 12:57:02.094485   61173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 12:57:02.129098   61173 cri.go:89] found id: ""
	I1202 12:57:02.129152   61173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 12:57:02.146731   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 12:57:02.156860   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 12:57:02.156881   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 12:57:02.156924   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 12:57:02.166273   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 12:57:02.166322   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 12:57:02.175793   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 12:57:02.184665   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 12:57:02.184707   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 12:57:02.194243   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.203173   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 12:57:02.203217   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 12:57:02.212563   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 12:57:02.221640   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 12:57:02.221682   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 12:57:02.230764   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 12:57:02.241691   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:02.353099   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.283720   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.487082   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.564623   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:03.644136   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 12:57:03.644219   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.144882   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:04.644873   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.144778   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.645022   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:57:05.662892   61173 api_server.go:72] duration metric: took 2.01875734s to wait for apiserver process to appear ...
	I1202 12:57:05.662920   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 12:57:05.662943   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.328451   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.328479   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.328492   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.368504   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 12:57:08.368547   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 12:57:08.664065   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:08.681253   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:08.681319   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.163310   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.169674   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 12:57:09.169699   61173 api_server.go:103] status: https://192.168.39.154:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 12:57:09.663220   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 12:57:09.667397   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 12:57:09.675558   61173 api_server.go:141] control plane version: v1.31.2
	I1202 12:57:09.675582   61173 api_server.go:131] duration metric: took 4.012653559s to wait for apiserver health ...
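The initial 403s above are returned while the probe is still anonymous and the rbac/bootstrap-roles post-start hook (marked [-] in the 500 responses) has not yet granted public access to /healthz; once those hooks finish, the endpoint flips to 200. A rough equivalent of the probe being run here, assuming the same self-signed CA (hence -k):

    # poll the secure port until /healthz reports ok
    until curl -sk https://192.168.39.154:8444/healthz | grep -qx ok; do sleep 1; done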
	I1202 12:57:09.675592   61173 cni.go:84] Creating CNI manager for ""
	I1202 12:57:09.675601   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 12:57:09.677275   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 12:57:09.678527   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 12:57:09.690640   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
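The bridge CNI config just written lands at /etc/cni/net.d/1-k8s.conflist inside the VM; to see exactly what was generated in a run like this one (profile name taken from the log above), something like:

    # dump the generated bridge CNI config from inside the node
    minikube -p default-k8s-diff-port-653783 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist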
	I1202 12:57:09.708185   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 12:57:09.724719   61173 system_pods.go:59] 8 kube-system pods found
	I1202 12:57:09.724747   61173 system_pods.go:61] "coredns-7c65d6cfc9-7g74d" [a35c0ad2-6c02-4e14-afe5-887b3b5fd70f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 12:57:09.724755   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [25bc45db-481f-4c88-853b-105a32e1e8e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 12:57:09.724763   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [af0f2123-8eac-4f90-bc06-1fc1cb10deda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 12:57:09.724769   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [c18b1705-438b-4954-941e-cfe5a3a0f6fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 12:57:09.724777   61173 system_pods.go:61] "kube-proxy-5t9gh" [35d08e89-5ad8-4fcb-9bff-5c12bc1fb497] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 12:57:09.724782   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [0db501e4-36fb-4a67-b11d-d6d9f3fa1383] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 12:57:09.724789   61173 system_pods.go:61] "metrics-server-6867b74b74-9v79b" [418c7615-5d41-4a24-b497-674f55573a0f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 12:57:09.724794   61173 system_pods.go:61] "storage-provisioner" [dab6b0c7-8e10-435f-a57c-76044eaa11c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 12:57:09.724799   61173 system_pods.go:74] duration metric: took 16.592713ms to wait for pod list to return data ...
	I1202 12:57:09.724808   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 12:57:09.731235   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 12:57:09.731260   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 12:57:09.731274   61173 node_conditions.go:105] duration metric: took 6.4605ms to run NodePressure ...
	I1202 12:57:09.731293   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 12:57:10.021346   61173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025152   61173 kubeadm.go:739] kubelet initialised
	I1202 12:57:10.025171   61173 kubeadm.go:740] duration metric: took 3.798597ms waiting for restarted kubelet to initialise ...
	I1202 12:57:10.025178   61173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 12:57:10.029834   61173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.033699   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033718   61173 pod_ready.go:82] duration metric: took 3.86169ms for pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.033726   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "coredns-7c65d6cfc9-7g74d" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.033731   61173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.037291   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037308   61173 pod_ready.go:82] duration metric: took 3.569468ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.037317   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.037322   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:10.041016   61173 pod_ready.go:98] node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041035   61173 pod_ready.go:82] duration metric: took 3.705222ms for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	E1202 12:57:10.041046   61173 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653783" hosting pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653783" has status "Ready":"False"
	I1202 12:57:10.041071   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:12.047581   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:14.048663   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:16.547831   61173 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:19.047816   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.047839   61173 pod_ready.go:82] duration metric: took 9.006753973s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.047850   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052277   61173 pod_ready.go:93] pod "kube-proxy-5t9gh" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:19.052296   61173 pod_ready.go:82] duration metric: took 4.440131ms for pod "kube-proxy-5t9gh" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:19.052305   61173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:21.058989   61173 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:22.558501   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 12:57:22.558524   61173 pod_ready.go:82] duration metric: took 3.506212984s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:22.558533   61173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	I1202 12:57:24.564668   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:27.064209   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
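The pod_ready loop above is minikube's own wait over the labels listed at 12:57:10; the same condition can be checked directly against the restarted cluster with kubectl, e.g. (context name matches the profile, timeout mirroring the 4m0s used above):

    # check the same readiness condition for the DNS pods directly
    kubectl --context default-k8s-diff-port-653783 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
    kubectl --context default-k8s-diff-port-653783 -n kube-system get pods -o wide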
	I1202 12:57:30.586451   59162 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1202 12:57:30.586705   59162 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1202 12:57:30.586735   59162 kubeadm.go:310] 
	I1202 12:57:30.586786   59162 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1202 12:57:30.586842   59162 kubeadm.go:310] 		timed out waiting for the condition
	I1202 12:57:30.586859   59162 kubeadm.go:310] 
	I1202 12:57:30.586924   59162 kubeadm.go:310] 	This error is likely caused by:
	I1202 12:57:30.586990   59162 kubeadm.go:310] 		- The kubelet is not running
	I1202 12:57:30.587140   59162 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 12:57:30.587152   59162 kubeadm.go:310] 
	I1202 12:57:30.587292   59162 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 12:57:30.587347   59162 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1202 12:57:30.587387   59162 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1202 12:57:30.587405   59162 kubeadm.go:310] 
	I1202 12:57:30.587557   59162 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1202 12:57:30.587642   59162 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1202 12:57:30.587655   59162 kubeadm.go:310] 
	I1202 12:57:30.587751   59162 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1202 12:57:30.587841   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1202 12:57:30.587923   59162 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1202 12:57:30.588029   59162 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1202 12:57:30.588043   59162 kubeadm.go:310] 
	I1202 12:57:30.588959   59162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 12:57:30.589087   59162 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1202 12:57:30.589211   59162 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1202 12:57:30.589277   59162 kubeadm.go:394] duration metric: took 7m57.557592718s to StartCluster
	I1202 12:57:30.589312   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 12:57:30.589358   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 12:57:30.634368   59162 cri.go:89] found id: ""
	I1202 12:57:30.634402   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.634414   59162 logs.go:284] No container was found matching "kube-apiserver"
	I1202 12:57:30.634423   59162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 12:57:30.634489   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 12:57:30.669582   59162 cri.go:89] found id: ""
	I1202 12:57:30.669605   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.669617   59162 logs.go:284] No container was found matching "etcd"
	I1202 12:57:30.669625   59162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 12:57:30.669679   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 12:57:30.707779   59162 cri.go:89] found id: ""
	I1202 12:57:30.707805   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.707815   59162 logs.go:284] No container was found matching "coredns"
	I1202 12:57:30.707823   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 12:57:30.707878   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 12:57:30.745724   59162 cri.go:89] found id: ""
	I1202 12:57:30.745751   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.745761   59162 logs.go:284] No container was found matching "kube-scheduler"
	I1202 12:57:30.745768   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 12:57:30.745816   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 12:57:30.782946   59162 cri.go:89] found id: ""
	I1202 12:57:30.782969   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.782980   59162 logs.go:284] No container was found matching "kube-proxy"
	I1202 12:57:30.782987   59162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 12:57:30.783040   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 12:57:30.821743   59162 cri.go:89] found id: ""
	I1202 12:57:30.821776   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.821787   59162 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 12:57:30.821795   59162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 12:57:30.821843   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 12:57:30.859754   59162 cri.go:89] found id: ""
	I1202 12:57:30.859783   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.859793   59162 logs.go:284] No container was found matching "kindnet"
	I1202 12:57:30.859801   59162 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 12:57:30.859876   59162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 12:57:30.893632   59162 cri.go:89] found id: ""
	I1202 12:57:30.893660   59162 logs.go:282] 0 containers: []
	W1202 12:57:30.893668   59162 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 12:57:30.893677   59162 logs.go:123] Gathering logs for kubelet ...
	I1202 12:57:30.893690   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 12:57:30.946387   59162 logs.go:123] Gathering logs for dmesg ...
	I1202 12:57:30.946413   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 12:57:30.960540   59162 logs.go:123] Gathering logs for describe nodes ...
	I1202 12:57:30.960565   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 12:57:31.038246   59162 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 12:57:31.038267   59162 logs.go:123] Gathering logs for CRI-O ...
	I1202 12:57:31.038279   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 12:57:31.155549   59162 logs.go:123] Gathering logs for container status ...
	I1202 12:57:31.155584   59162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
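When the control plane never comes up, the diagnostics gathered above can be reproduced by hand on the node (via minikube ssh or direct SSH); a minimal sketch using the same commands this log and the kubeadm advice run:

    # kubelet status and recent logs
    sudo systemctl status kubelet
    sudo journalctl -u kubelet -n 400
    # any control-plane containers CRI-O managed to start, plus runtime logs
    sudo crictl ps -a | grep kube | grep -v pause
    sudo journalctl -u crio -n 400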
	W1202 12:57:31.221709   59162 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1202 12:57:31.221773   59162 out.go:270] * 
	W1202 12:57:31.221846   59162 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.221868   59162 out.go:270] * 
	W1202 12:57:31.222987   59162 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 12:57:31.226661   59162 out.go:201] 
	W1202 12:57:31.227691   59162 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 12:57:31.227739   59162 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 12:57:31.227763   59162 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 12:57:31.229696   59162 out.go:201] 
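Acting on the suggestion printed just above, a retry of this profile would pass the extra kubelet config at start time (profile name left as a placeholder since it is not shown in this part of the log):

    # retry with the cgroup driver the suggestion recommends
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd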
	I1202 12:57:29.064892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:31.065451   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:33.564442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:36.064844   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:38.065020   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:40.565467   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:43.065021   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:45.065674   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:47.565692   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:50.064566   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:52.065673   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:54.563919   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:56.565832   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:57:59.064489   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:01.064627   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:03.066470   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:05.565311   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:07.565342   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:10.065050   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:12.565026   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:15.065113   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:17.065377   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:19.570428   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:22.065941   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:24.564883   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:27.064907   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:29.565025   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:31.565662   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:33.566049   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:36.064675   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:38.064820   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:40.065555   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:42.565304   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:44.566076   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:47.064538   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:49.064571   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:51.064914   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:53.065942   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:55.564490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:58:57.566484   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:00.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:02.065385   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:04.065541   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:06.065687   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:08.564349   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:11.064985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:13.065285   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:15.565546   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:17.569757   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:20.065490   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:22.565206   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:25.065588   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:27.065818   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:29.066671   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:31.565998   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:34.064527   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:36.064698   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:38.065158   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:40.563432   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:42.571603   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:45.065725   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:47.565321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:50.065712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:52.564522   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:55.065989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:57.563712   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 12:59:59.565908   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:02.065655   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:04.564520   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:07.065360   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:09.566223   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:12.065149   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:14.564989   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:17.064321   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:19.066069   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:21.066247   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:23.564474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:26.065294   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:28.563804   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:30.565317   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:32.565978   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:35.064896   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:37.065442   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:39.065516   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:41.565297   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:44.064849   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:46.564956   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:49.065151   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:51.065892   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:53.570359   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:56.064144   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:00:58.065042   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:00.065116   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:02.065474   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:04.564036   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:06.564531   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:08.565018   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:10.565163   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:13.065421   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:15.065623   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:17.564985   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:20.065093   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.065732   61173 pod_ready.go:103] pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace has status "Ready":"False"
	I1202 13:01:22.559325   61173 pod_ready.go:82] duration metric: took 4m0.000776679s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" ...
	E1202 13:01:22.559360   61173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-9v79b" in "kube-system" namespace to be "Ready" (will not retry!)
	I1202 13:01:22.559393   61173 pod_ready.go:39] duration metric: took 4m12.534205059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
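When a readiness wait like this expires, the usual follow-up is to inspect the pod itself. Assuming kubectl is pointed at the affected cluster, two standard checks would be (pod name taken from the log above):

	# describe shows scheduling and probe events; logs show the container's own output
	kubectl -n kube-system describe pod metrics-server-6867b74b74-9v79b
	kubectl -n kube-system logs metrics-server-6867b74b74-9v79b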
	I1202 13:01:22.559419   61173 kubeadm.go:597] duration metric: took 4m20.488585813s to restartPrimaryControlPlane
	W1202 13:01:22.559474   61173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 13:01:22.559501   61173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1202 13:01:48.872503   61173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312974314s)
	I1202 13:01:48.872571   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:01:48.893337   61173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:01:48.921145   61173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:01:48.934577   61173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:01:48.934594   61173 kubeadm.go:157] found existing configuration files:
	
	I1202 13:01:48.934639   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1202 13:01:48.956103   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:01:48.956162   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:01:48.967585   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1202 13:01:48.984040   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:01:48.984084   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:01:48.994049   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.003811   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:01:49.003859   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:01:49.013646   61173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1202 13:01:49.023003   61173 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:01:49.023051   61173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 13:01:49.032678   61173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:01:49.196294   61173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 13:01:57.349437   61173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:01:57.349497   61173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:01:57.349571   61173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:01:57.349740   61173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:01:57.349882   61173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:01:57.349976   61173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:01:57.351474   61173 out.go:235]   - Generating certificates and keys ...
	I1202 13:01:57.351576   61173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:01:57.351634   61173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:01:57.351736   61173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 13:01:57.351842   61173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1202 13:01:57.351952   61173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 13:01:57.352035   61173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1202 13:01:57.352132   61173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1202 13:01:57.352202   61173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1202 13:01:57.352325   61173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 13:01:57.352439   61173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 13:01:57.352515   61173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1202 13:01:57.352608   61173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:01:57.352689   61173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:01:57.352775   61173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:01:57.352860   61173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:01:57.352962   61173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:01:57.353058   61173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:01:57.353172   61173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:01:57.353295   61173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 13:01:57.354669   61173 out.go:235]   - Booting up control plane ...
	I1202 13:01:57.354756   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 13:01:57.354829   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 13:01:57.354884   61173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 13:01:57.354984   61173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 13:01:57.355073   61173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 13:01:57.355127   61173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 13:01:57.355280   61173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 13:01:57.355435   61173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 13:01:57.355528   61173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.24354ms
	I1202 13:01:57.355641   61173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 13:01:57.355720   61173 kubeadm.go:310] [api-check] The API server is healthy after 5.002367533s
	I1202 13:01:57.355832   61173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 13:01:57.355945   61173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 13:01:57.356000   61173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 13:01:57.356175   61173 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 13:01:57.356246   61173 kubeadm.go:310] [bootstrap-token] Using token: 0oxhck.9gzdpio1kzs08rgi
	I1202 13:01:57.357582   61173 out.go:235]   - Configuring RBAC rules ...
	I1202 13:01:57.357692   61173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 13:01:57.357798   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 13:01:57.357973   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 13:01:57.358102   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 13:01:57.358246   61173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 13:01:57.358361   61173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 13:01:57.358460   61173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 13:01:57.358497   61173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 13:01:57.358547   61173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 13:01:57.358557   61173 kubeadm.go:310] 
	I1202 13:01:57.358615   61173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 13:01:57.358625   61173 kubeadm.go:310] 
	I1202 13:01:57.358691   61173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 13:01:57.358698   61173 kubeadm.go:310] 
	I1202 13:01:57.358730   61173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 13:01:57.358800   61173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 13:01:57.358878   61173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 13:01:57.358889   61173 kubeadm.go:310] 
	I1202 13:01:57.358954   61173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 13:01:57.358961   61173 kubeadm.go:310] 
	I1202 13:01:57.358999   61173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 13:01:57.359005   61173 kubeadm.go:310] 
	I1202 13:01:57.359047   61173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 13:01:57.359114   61173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 13:01:57.359179   61173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 13:01:57.359185   61173 kubeadm.go:310] 
	I1202 13:01:57.359271   61173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 13:01:57.359364   61173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 13:01:57.359377   61173 kubeadm.go:310] 
	I1202 13:01:57.359451   61173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359561   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
	I1202 13:01:57.359581   61173 kubeadm.go:310] 	--control-plane 
	I1202 13:01:57.359587   61173 kubeadm.go:310] 
	I1202 13:01:57.359666   61173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 13:01:57.359678   61173 kubeadm.go:310] 
	I1202 13:01:57.359745   61173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 0oxhck.9gzdpio1kzs08rgi \
	I1202 13:01:57.359848   61173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb 
	I1202 13:01:57.359874   61173 cni.go:84] Creating CNI manager for ""
	I1202 13:01:57.359887   61173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 13:01:57.361282   61173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 13:01:57.362319   61173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 13:01:57.373455   61173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
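The 496-byte file written above is minikube's bridge CNI configuration. Its exact contents are not shown in the log, but a typical bridge conflist has roughly this shape (the version, subnet, and plugin options here are illustrative, not taken from the log):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}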
	I1202 13:01:57.393003   61173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 13:01:57.393055   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:57.393136   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653783 minikube.k8s.io/updated_at=2024_12_02T13_01_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=default-k8s-diff-port-653783 minikube.k8s.io/primary=true
	I1202 13:01:57.426483   61173 ops.go:34] apiserver oom_adj: -16
	I1202 13:01:57.584458   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.084831   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:58.585450   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.084976   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:01:59.585068   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.085470   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:00.584722   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.084770   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.585414   61173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 13:02:01.725480   61173 kubeadm.go:1113] duration metric: took 4.332474868s to wait for elevateKubeSystemPrivileges
	I1202 13:02:01.725523   61173 kubeadm.go:394] duration metric: took 4m59.70293206s to StartCluster
	I1202 13:02:01.725545   61173 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.725633   61173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:02:01.730008   61173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:02:01.730438   61173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.154 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:02:01.730586   61173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 13:02:01.730685   61173 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730703   61173 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730707   61173 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730719   61173 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653783"
	I1202 13:02:01.730734   61173 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:01.730736   61173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653783"
	W1202 13:02:01.730746   61173 addons.go:243] addon metrics-server should already be in state true
	I1202 13:02:01.730776   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	W1202 13:02:01.730711   61173 addons.go:243] addon storage-provisioner should already be in state true
	I1202 13:02:01.730865   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.731186   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731204   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731215   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.731220   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731235   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.731255   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.730707   61173 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:02:01.731895   61173 out.go:177] * Verifying Kubernetes components...
	I1202 13:02:01.733515   61173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:02:01.748534   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1202 13:02:01.749156   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.749717   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.749743   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.750167   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.750734   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.750771   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.750997   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I1202 13:02:01.751714   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I1202 13:02:01.751911   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752088   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.752388   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.752406   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.752785   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753212   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.753240   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.753514   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.753527   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.753807   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.753953   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.756554   61173 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653783"
	W1202 13:02:01.756567   61173 addons.go:243] addon default-storageclass should already be in state true
	I1202 13:02:01.756588   61173 host.go:66] Checking if "default-k8s-diff-port-653783" exists ...
	I1202 13:02:01.756803   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.756824   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.769388   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I1202 13:02:01.769867   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.770303   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.770328   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.770810   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.770984   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.771974   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1202 13:02:01.772430   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.773043   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.773068   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.773294   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.773441   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.773707   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.775187   61173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 13:02:01.775514   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.776461   61173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:01.776482   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 13:02:01.776499   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.776562   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46841
	I1202 13:02:01.776927   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.777077   61173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 13:02:01.777497   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.777509   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.777795   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.778197   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 13:02:01.778215   61173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 13:02:01.778235   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.778284   61173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:02:01.778315   61173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:02:01.779324   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780389   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.780472   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.780336   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.780832   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.780996   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.781101   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.781390   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781588   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.781608   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.781737   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.781886   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.781973   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.782063   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.793947   61173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I1202 13:02:01.794298   61173 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:02:01.794720   61173 main.go:141] libmachine: Using API Version  1
	I1202 13:02:01.794737   61173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:02:01.795031   61173 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:02:01.795200   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetState
	I1202 13:02:01.796909   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .DriverName
	I1202 13:02:01.797092   61173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:01.797104   61173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 13:02:01.797121   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHHostname
	I1202 13:02:01.799831   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800160   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f6:f0", ip: ""} in network mk-default-k8s-diff-port-653783: {Iface:virbr3 ExpiryTime:2024-12-02 13:56:48 +0000 UTC Type:0 Mac:52:54:00:00:f6:f0 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:default-k8s-diff-port-653783 Clientid:01:52:54:00:00:f6:f0}
	I1202 13:02:01.800191   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | domain default-k8s-diff-port-653783 has defined IP address 192.168.39.154 and MAC address 52:54:00:00:f6:f0 in network mk-default-k8s-diff-port-653783
	I1202 13:02:01.800416   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHPort
	I1202 13:02:01.800595   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHKeyPath
	I1202 13:02:01.800702   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .GetSSHUsername
	I1202 13:02:01.800823   61173 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/default-k8s-diff-port-653783/id_rsa Username:docker}
	I1202 13:02:01.936668   61173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:02:01.954328   61173 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968409   61173 node_ready.go:49] node "default-k8s-diff-port-653783" has status "Ready":"True"
	I1202 13:02:01.968427   61173 node_ready.go:38] duration metric: took 14.066432ms for node "default-k8s-diff-port-653783" to be "Ready" ...
	I1202 13:02:01.968436   61173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:01.981818   61173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
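The same readiness information the test is polling for can be pulled by hand, e.g.:

	# assumes kubectl is using the default-k8s-diff-port-653783 context written to the kubeconfig above
	kubectl get nodes
	kubectl -n kube-system get pods -o wide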
	I1202 13:02:02.071558   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 13:02:02.071590   61173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 13:02:02.076260   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 13:02:02.085318   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 13:02:02.098342   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 13:02:02.098363   61173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 13:02:02.156135   61173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.156165   61173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 13:02:02.175618   61173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 13:02:02.359810   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.359841   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360111   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360201   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.360179   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360225   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.360246   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.360518   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.360528   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.360532   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:02.366246   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:02.366270   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:02.366633   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:02.366647   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:02.366660   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.134955   61173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049592704s)
	I1202 13:02:03.135040   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135059   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135084   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135114   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135342   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135392   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135413   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135432   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.135533   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.135565   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.135584   61173 main.go:141] libmachine: Making call to close driver server
	I1202 13:02:03.135602   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) Calling .Close
	I1202 13:02:03.136554   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136558   61173 main.go:141] libmachine: Successfully made call to close driver server
	I1202 13:02:03.136569   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136568   61173 main.go:141] libmachine: (default-k8s-diff-port-653783) DBG | Closing plugin on server side
	I1202 13:02:03.136572   61173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1202 13:02:03.136579   61173 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653783"
	I1202 13:02:03.138071   61173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1202 13:02:03.139462   61173 addons.go:510] duration metric: took 1.408893663s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
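For reference, the equivalent manual steps on this profile would go through minikube's addon commands, and the metrics-server registration applied above can be checked through its APIService (conventionally named v1beta1.metrics.k8s.io by metrics-apiservice.yaml):

	minikube -p default-k8s-diff-port-653783 addons list
	minikube -p default-k8s-diff-port-653783 addons enable metrics-server
	# APIService name is the conventional one; confirm with `kubectl get apiservices`
	kubectl get apiservice v1beta1.metrics.k8s.io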
	I1202 13:02:03.986445   61173 pod_ready.go:93] pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:03.986471   61173 pod_ready.go:82] duration metric: took 2.0046319s for pod "coredns-7c65d6cfc9-2stsx" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:03.986482   61173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.492973   61173 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:04.492995   61173 pod_ready.go:82] duration metric: took 506.506566ms for pod "etcd-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:04.493004   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:06.500118   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.502468   61173 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"False"
	I1202 13:02:08.999764   61173 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:08.999785   61173 pod_ready.go:82] duration metric: took 4.506775084s for pod "kube-apiserver-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:08.999795   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005354   61173 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.005376   61173 pod_ready.go:82] duration metric: took 1.005574607s for pod "kube-controller-manager-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.005385   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010948   61173 pod_ready.go:93] pod "kube-proxy-d4vw4" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.010964   61173 pod_ready.go:82] duration metric: took 5.574069ms for pod "kube-proxy-d4vw4" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.010972   61173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014901   61173 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace has status "Ready":"True"
	I1202 13:02:10.014918   61173 pod_ready.go:82] duration metric: took 3.938654ms for pod "kube-scheduler-default-k8s-diff-port-653783" in "kube-system" namespace to be "Ready" ...
	I1202 13:02:10.014927   61173 pod_ready.go:39] duration metric: took 8.046482137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:02:10.014943   61173 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:02:10.014994   61173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:02:10.032401   61173 api_server.go:72] duration metric: took 8.301924942s to wait for apiserver process to appear ...
	I1202 13:02:10.032418   61173 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:02:10.032436   61173 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8444/healthz ...
	I1202 13:02:10.036406   61173 api_server.go:279] https://192.168.39.154:8444/healthz returned 200:
	ok
	I1202 13:02:10.037035   61173 api_server.go:141] control plane version: v1.31.2
	I1202 13:02:10.037052   61173 api_server.go:131] duration metric: took 4.627223ms to wait for apiserver health ...
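The same health endpoint can be probed directly from a shell; the URL is the one shown in the log, and -k is needed because the API server certificate is issued by minikube's own CA rather than a system-trusted one:

	curl -k https://192.168.39.154:8444/healthz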
	I1202 13:02:10.037061   61173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:02:10.042707   61173 system_pods.go:59] 9 kube-system pods found
	I1202 13:02:10.042731   61173 system_pods.go:61] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.042740   61173 system_pods.go:61] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.042746   61173 system_pods.go:61] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.042752   61173 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.042758   61173 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.042762   61173 system_pods.go:61] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.042768   61173 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.042776   61173 system_pods.go:61] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.042782   61173 system_pods.go:61] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.042794   61173 system_pods.go:74] duration metric: took 5.724009ms to wait for pod list to return data ...
	I1202 13:02:10.042800   61173 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:02:10.045407   61173 default_sa.go:45] found service account: "default"
	I1202 13:02:10.045422   61173 default_sa.go:55] duration metric: took 2.615305ms for default service account to be created ...
	I1202 13:02:10.045428   61173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:02:10.050473   61173 system_pods.go:86] 9 kube-system pods found
	I1202 13:02:10.050494   61173 system_pods.go:89] "coredns-7c65d6cfc9-2qfb5" [13f41c48-90af-4524-98fc-22daf331fbcb] Running
	I1202 13:02:10.050499   61173 system_pods.go:89] "coredns-7c65d6cfc9-2stsx" [3cb9697b-974e-4f8e-9931-38fe3d971940] Running
	I1202 13:02:10.050505   61173 system_pods.go:89] "etcd-default-k8s-diff-port-653783" [adfc38c0-b63b-404d-b279-03f3265f1cf6] Running
	I1202 13:02:10.050510   61173 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653783" [c09effaa-0cea-47db-aca6-8f1d6612b194] Running
	I1202 13:02:10.050514   61173 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653783" [7efc2e68-5d67-4ee7-8b00-e23124acdf63] Running
	I1202 13:02:10.050518   61173 system_pods.go:89] "kube-proxy-d4vw4" [487da76d-2fae-4df0-b663-0cf128ae2911] Running
	I1202 13:02:10.050526   61173 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653783" [94e85eeb-5304-4258-b76b-ac8eb0461069] Running
	I1202 13:02:10.050532   61173 system_pods.go:89] "metrics-server-6867b74b74-tcr8r" [2f017719-26ad-44ca-a44a-e6c20cd6438c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 13:02:10.050540   61173 system_pods.go:89] "storage-provisioner" [8975d342-96fa-4173-b477-e25909ca76da] Running
	I1202 13:02:10.050547   61173 system_pods.go:126] duration metric: took 5.115018ms to wait for k8s-apps to be running ...
	I1202 13:02:10.050552   61173 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:02:10.050588   61173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:02:10.065454   61173 system_svc.go:56] duration metric: took 14.89671ms WaitForService to wait for kubelet
	I1202 13:02:10.065475   61173 kubeadm.go:582] duration metric: took 8.335001135s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:02:10.065490   61173 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:02:10.199102   61173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:02:10.199123   61173 node_conditions.go:123] node cpu capacity is 2
	I1202 13:02:10.199136   61173 node_conditions.go:105] duration metric: took 133.639645ms to run NodePressure ...
	I1202 13:02:10.199148   61173 start.go:241] waiting for startup goroutines ...
	I1202 13:02:10.199156   61173 start.go:246] waiting for cluster config update ...
	I1202 13:02:10.199167   61173 start.go:255] writing updated cluster config ...
	I1202 13:02:10.199421   61173 ssh_runner.go:195] Run: rm -f paused
	I1202 13:02:10.246194   61173 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:02:10.248146   61173 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653783" cluster and "default" namespace by default
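
The sequence above is minikube's post-start verification for the default-k8s-diff-port profile: confirm a kube-apiserver process exists (pgrep), poll the /healthz endpoint until it answers 200 with "ok", read the control-plane version, then check system pods, the default service account, the kubelet service and node conditions. A minimal Go sketch of just the health probe, using the endpoint from the log; skipping TLS verification and probing unauthenticated are assumptions made for brevity, not what minikube itself does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // healthz reports whether the endpoint answers 200 with body "ok",
    // the condition the log above treats as "apiserver healthy".
    func healthz(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: certificate verification is skipped for the sketch;
            // minikube's real probe uses the cluster's own credentials.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := healthz("https://192.168.39.154:8444/healthz")
        fmt.Println(ok, err)
    }

The same endpoint can also be queried through the configured credentials with kubectl get --raw /healthz against this context.
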
	
	
	==> CRI-O <==
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.474945665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144968474914241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79859631-abba-415b-a739-782195abf709 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.475598107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=202397f6-51cf-4b48-8e78-f4a202b2ac87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.475664451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=202397f6-51cf-4b48-8e78-f4a202b2ac87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.475704235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=202397f6-51cf-4b48-8e78-f4a202b2ac87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.509524843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e5310b3-8fae-4d0d-a143-37ad43faf261 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.509613087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e5310b3-8fae-4d0d-a143-37ad43faf261 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.510750270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2967efde-3717-41ef-b455-e9c93c70f2e3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.511138314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144968511119830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2967efde-3717-41ef-b455-e9c93c70f2e3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.511748899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9702d40-4c7e-46a8-ac08-33ff4eaac5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.511798075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9702d40-4c7e-46a8-ac08-33ff4eaac5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.511830476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b9702d40-4c7e-46a8-ac08-33ff4eaac5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.542519689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b15be740-86d1-4498-9332-3c07d8384379 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.542609053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b15be740-86d1-4498-9332-3c07d8384379 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.543398183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=355f791d-9c15-4dff-8134-6ccbe27370b7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.543763706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144968543743382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=355f791d-9c15-4dff-8134-6ccbe27370b7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.544382557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=094c41e7-9dee-4fd9-8ea2-98db22cf571a name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.544454255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=094c41e7-9dee-4fd9-8ea2-98db22cf571a name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.544486106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=094c41e7-9dee-4fd9-8ea2-98db22cf571a name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.575431284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=350c3ec4-7b9c-4294-9e7b-7bea403f3308 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.575516770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=350c3ec4-7b9c-4294-9e7b-7bea403f3308 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.576302551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04e7c032-4d6b-469c-be66-e20b93b8684f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.576754701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733144968576725314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04e7c032-4d6b-469c-be66-e20b93b8684f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.577261341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30253fae-3e88-4c39-84ff-25aae6decacb name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.577364566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30253fae-3e88-4c39-84ff-25aae6decacb name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:09:28 old-k8s-version-666766 crio[628]: time="2024-12-02 13:09:28.577430198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=30253fae-3e88-4c39-84ff-25aae6decacb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 12:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056211] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.145598] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.034204] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.629273] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.854910] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.063738] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078588] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.173629] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.134990] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.253737] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.528775] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.061339] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.164841] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[ +11.104852] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 2 12:53] systemd-fstab-generator[5075]: Ignoring "noauto" option for root device
	[Dec 2 12:55] systemd-fstab-generator[5352]: Ignoring "noauto" option for root device
	[  +0.070336] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:09:28 up 20 min,  0 users,  load average: 0.00, 0.01, 0.06
	Linux old-k8s-version-666766 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00095c6f0)
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b11ef0, 0x4f0ac20, 0xc000954550, 0x1, 0xc0001000c0)
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002da380, 0xc0001000c0)
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b95150, 0xc000c760e0)
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 02 13:09:23 old-k8s-version-666766 kubelet[6891]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 02 13:09:23 old-k8s-version-666766 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 02 13:09:23 old-k8s-version-666766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 13:09:24 old-k8s-version-666766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 144.
	Dec 02 13:09:24 old-k8s-version-666766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 02 13:09:24 old-k8s-version-666766 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 02 13:09:24 old-k8s-version-666766 kubelet[6901]: I1202 13:09:24.209366    6901 server.go:416] Version: v1.20.0
	Dec 02 13:09:24 old-k8s-version-666766 kubelet[6901]: I1202 13:09:24.209576    6901 server.go:837] Client rotation is on, will bootstrap in background
	Dec 02 13:09:24 old-k8s-version-666766 kubelet[6901]: I1202 13:09:24.211464    6901 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 02 13:09:24 old-k8s-version-666766 kubelet[6901]: I1202 13:09:24.212469    6901 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 02 13:09:24 old-k8s-version-666766 kubelet[6901]: W1202 13:09:24.212590    6901 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
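
The dump above captures the failure mode for this test: CRI-O reports an empty container list, kubectl describe nodes cannot reach an apiserver on localhost:8443, and the v1.20.0 kubelet is crash-looping (systemd restart counter at 144, with a "Cannot detect current cgroup on cgroup v2" warning on each start). A rough triage sketch for that state, assuming it runs on the node itself (for example after minikube ssh -p old-k8s-version-666766) with passwordless sudo; the commands are ordinary systemd and cri-tools invocations, not part of the test harness:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and prints its combined output, error included,
    // so a dead unit or an empty container list is still visible.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v (err=%v)\n%s\n", name, args, err, out)
    }

    func main() {
        run("sudo", "systemctl", "is-active", "kubelet")                     // current service state
        run("sudo", "journalctl", "-u", "kubelet", "-n", "20", "--no-pager") // tail of the last crash
        run("sudo", "crictl", "ps", "-a")                                    // what CRI-O is actually running
    }
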
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 2 (247.845988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-666766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (174.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (155.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-02 13:13:46.252078267 +0000 UTC m=+6205.083863268
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.409µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-653783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
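
The assertion above polls for a pod labelled k8s-app=kubernetes-dashboard to become Ready within 9m0s; once that deadline lapses, even the follow-up kubectl describe inherits the exhausted context. A compact client-go sketch of this kind of wait; the kubeconfig path and the 5-second poll interval are illustrative assumptions rather than the harness's actual values:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady mirrors the check behind "Ready":"True" in the logs above.
    func podReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        // Assumption: the current context of this kubeconfig points at the
        // cluster under test (default-k8s-diff-port-653783).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(9 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
                fmt.Println("dashboard pod is Ready")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for the kubernetes-dashboard pod")
    }

A roughly equivalent one-liner against the same context is: kubectl -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m.
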
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-653783 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-653783 logs -n 25: (1.613070286s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC |                     |
	|         | docker system info                                   |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo cat                    | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo cat                    | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo cat                    | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-256954 sudo                        | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-256954                             | custom-flannel-256954     | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	| start   | -p bridge-256954 --memory=3072                       | bridge-256954             | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-256954                         | enable-default-cni-256954 | jenkins | v1.34.0 | 02 Dec 24 13:13 UTC | 02 Dec 24 13:13 UTC |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 13:13:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 13:13:24.206614   73906 out.go:345] Setting OutFile to fd 1 ...
	I1202 13:13:24.206706   73906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:13:24.206716   73906 out.go:358] Setting ErrFile to fd 2...
	I1202 13:13:24.206723   73906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 13:13:24.206877   73906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 13:13:24.207394   73906 out.go:352] Setting JSON to false
	I1202 13:13:24.208465   73906 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6956,"bootTime":1733138248,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 13:13:24.208521   73906 start.go:139] virtualization: kvm guest
	I1202 13:13:24.210439   73906 out.go:177] * [bridge-256954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 13:13:24.211592   73906 notify.go:220] Checking for updates...
	I1202 13:13:24.211615   73906 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 13:13:24.213848   73906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 13:13:24.215298   73906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 13:13:24.216524   73906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:13:24.217566   73906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 13:13:24.218716   73906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 13:13:24.220248   73906 config.go:182] Loaded profile config "default-k8s-diff-port-653783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:13:24.220370   73906 config.go:182] Loaded profile config "enable-default-cni-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:13:24.220476   73906 config.go:182] Loaded profile config "flannel-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:13:24.220546   73906 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 13:13:24.259568   73906 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 13:13:24.260684   73906 start.go:297] selected driver: kvm2
	I1202 13:13:24.260698   73906 start.go:901] validating driver "kvm2" against <nil>
	I1202 13:13:24.260717   73906 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 13:13:24.261437   73906 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:13:24.261506   73906 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 13:13:24.277115   73906 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 13:13:24.277148   73906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 13:13:24.277429   73906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:13:24.277464   73906 cni.go:84] Creating CNI manager for "bridge"
	I1202 13:13:24.277471   73906 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 13:13:24.277526   73906 start.go:340] cluster config:
	{Name:bridge-256954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:13:24.277664   73906 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 13:13:24.279175   73906 out.go:177] * Starting "bridge-256954" primary control-plane node in "bridge-256954" cluster
	I1202 13:13:21.669211   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:21.669743   72370 main.go:141] libmachine: (flannel-256954) DBG | unable to find current IP address of domain flannel-256954 in network mk-flannel-256954
	I1202 13:13:21.669774   72370 main.go:141] libmachine: (flannel-256954) DBG | I1202 13:13:21.669700   72393 retry.go:31] will retry after 3.596506189s: waiting for machine to come up
	I1202 13:13:23.736095   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:26.234606   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:24.280296   73906 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 13:13:24.280328   73906 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 13:13:24.280342   73906 cache.go:56] Caching tarball of preloaded images
	I1202 13:13:24.280414   73906 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 13:13:24.280430   73906 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 13:13:24.280539   73906 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/bridge-256954/config.json ...
	I1202 13:13:24.280559   73906 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/bridge-256954/config.json: {Name:mkcf7e3ddbe13eff1423de6960bbc724c4ba9c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:24.280701   73906 start.go:360] acquireMachinesLock for bridge-256954: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 13:13:25.267794   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:25.268330   72370 main.go:141] libmachine: (flannel-256954) DBG | unable to find current IP address of domain flannel-256954 in network mk-flannel-256954
	I1202 13:13:25.268370   72370 main.go:141] libmachine: (flannel-256954) DBG | I1202 13:13:25.268283   72393 retry.go:31] will retry after 3.202075219s: waiting for machine to come up
	I1202 13:13:28.473301   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:28.473811   72370 main.go:141] libmachine: (flannel-256954) DBG | unable to find current IP address of domain flannel-256954 in network mk-flannel-256954
	I1202 13:13:28.473835   72370 main.go:141] libmachine: (flannel-256954) DBG | I1202 13:13:28.473781   72393 retry.go:31] will retry after 4.555482284s: waiting for machine to come up
	I1202 13:13:28.235512   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:30.235922   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:32.735574   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:33.031213   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:33.031664   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has current primary IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:33.031689   72370 main.go:141] libmachine: (flannel-256954) Found IP for machine: 192.168.50.188
	I1202 13:13:33.031709   72370 main.go:141] libmachine: (flannel-256954) Reserving static IP address...
	I1202 13:13:33.032018   72370 main.go:141] libmachine: (flannel-256954) DBG | unable to find host DHCP lease matching {name: "flannel-256954", mac: "52:54:00:de:93:9d", ip: "192.168.50.188"} in network mk-flannel-256954
	I1202 13:13:33.104129   72370 main.go:141] libmachine: (flannel-256954) DBG | Getting to WaitForSSH function...
	I1202 13:13:33.104154   72370 main.go:141] libmachine: (flannel-256954) Reserved static IP address: 192.168.50.188
	I1202 13:13:33.104166   72370 main.go:141] libmachine: (flannel-256954) Waiting for SSH to be available...
	I1202 13:13:33.106977   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:33.107338   72370 main.go:141] libmachine: (flannel-256954) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954
	I1202 13:13:33.107360   72370 main.go:141] libmachine: (flannel-256954) DBG | unable to find defined IP address of network mk-flannel-256954 interface with MAC address 52:54:00:de:93:9d
	I1202 13:13:33.107538   72370 main.go:141] libmachine: (flannel-256954) DBG | Using SSH client type: external
	I1202 13:13:33.107563   72370 main.go:141] libmachine: (flannel-256954) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa (-rw-------)
	I1202 13:13:33.107601   72370 main.go:141] libmachine: (flannel-256954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 13:13:33.107618   72370 main.go:141] libmachine: (flannel-256954) DBG | About to run SSH command:
	I1202 13:13:33.107634   72370 main.go:141] libmachine: (flannel-256954) DBG | exit 0
	I1202 13:13:33.111193   72370 main.go:141] libmachine: (flannel-256954) DBG | SSH cmd err, output: exit status 255: 
	I1202 13:13:33.111216   72370 main.go:141] libmachine: (flannel-256954) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1202 13:13:33.111227   72370 main.go:141] libmachine: (flannel-256954) DBG | command : exit 0
	I1202 13:13:33.111238   72370 main.go:141] libmachine: (flannel-256954) DBG | err     : exit status 255
	I1202 13:13:33.111264   72370 main.go:141] libmachine: (flannel-256954) DBG | output  : 
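
The libmachine lines above keep retrying a bare "exit 0" over SSH until the freshly created guest accepts connections; exit status 255 is the ssh client failing to connect, not the remote command failing, so the attempt is simply repeated. A simplified sketch of that wait, reusing the user, address and key path from the log but with an assumed fixed 5-second retry interval instead of libmachine's own backoff:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries `ssh ... exit 0` until the guest answers or the
    // attempt budget runs out.
    func waitForSSH(user, addr, key string) error {
        for i := 0; i < 12; i++ {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", key,
                fmt.Sprintf("%s@%s", user, addr),
                "exit", "0")
            if err := cmd.Run(); err == nil {
                return nil // the guest ran `exit 0`; SSH is up
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("ssh to %s@%s never became available", user, addr)
    }

    func main() {
        err := waitForSSH("docker", "192.168.50.188",
            "/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa")
        fmt.Println(err)
    }
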
	I1202 13:13:37.400975   73906 start.go:364] duration metric: took 13.120247623s to acquireMachinesLock for "bridge-256954"
	I1202 13:13:37.401066   73906 start.go:93] Provisioning new machine with config: &{Name:bridge-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 13:13:37.401214   73906 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 13:13:34.735907   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:36.736609   70055 pod_ready.go:103] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"False"
	I1202 13:13:37.737677   70055 pod_ready.go:93] pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace has status "Ready":"True"
	I1202 13:13:37.737695   70055 pod_ready.go:82] duration metric: took 26.008146696s for pod "coredns-7c65d6cfc9-zwxjv" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.737705   70055 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.742938   70055 pod_ready.go:93] pod "etcd-enable-default-cni-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:13:37.742967   70055 pod_ready.go:82] duration metric: took 5.254121ms for pod "etcd-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.742985   70055 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.748377   70055 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:13:37.748398   70055 pod_ready.go:82] duration metric: took 5.400381ms for pod "kube-apiserver-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.748409   70055 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.752835   70055 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:13:37.752851   70055 pod_ready.go:82] duration metric: took 4.434564ms for pod "kube-controller-manager-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.752862   70055 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-v55kc" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.757659   70055 pod_ready.go:93] pod "kube-proxy-v55kc" in "kube-system" namespace has status "Ready":"True"
	I1202 13:13:37.757675   70055 pod_ready.go:82] duration metric: took 4.807217ms for pod "kube-proxy-v55kc" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:37.757683   70055 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:38.133820   70055 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-256954" in "kube-system" namespace has status "Ready":"True"
	I1202 13:13:38.133841   70055 pod_ready.go:82] duration metric: took 376.151534ms for pod "kube-scheduler-enable-default-cni-256954" in "kube-system" namespace to be "Ready" ...
	I1202 13:13:38.133850   70055 pod_ready.go:39] duration metric: took 37.427498738s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 13:13:38.133864   70055 api_server.go:52] waiting for apiserver process to appear ...
	I1202 13:13:38.133917   70055 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 13:13:38.151319   70055 api_server.go:72] duration metric: took 38.755399448s to wait for apiserver process to appear ...
	I1202 13:13:38.151345   70055 api_server.go:88] waiting for apiserver healthz status ...
	I1202 13:13:38.151366   70055 api_server.go:253] Checking apiserver healthz at https://192.168.72.54:8443/healthz ...
	I1202 13:13:36.111754   72370 main.go:141] libmachine: (flannel-256954) DBG | Getting to WaitForSSH function...
	I1202 13:13:36.114124   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.114502   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.114526   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.114662   72370 main.go:141] libmachine: (flannel-256954) DBG | Using SSH client type: external
	I1202 13:13:36.114689   72370 main.go:141] libmachine: (flannel-256954) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa (-rw-------)
	I1202 13:13:36.114719   72370 main.go:141] libmachine: (flannel-256954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1202 13:13:36.114758   72370 main.go:141] libmachine: (flannel-256954) DBG | About to run SSH command:
	I1202 13:13:36.114775   72370 main.go:141] libmachine: (flannel-256954) DBG | exit 0
	I1202 13:13:36.240252   72370 main.go:141] libmachine: (flannel-256954) DBG | SSH cmd err, output: <nil>: 
	I1202 13:13:36.240521   72370 main.go:141] libmachine: (flannel-256954) KVM machine creation complete!
	I1202 13:13:36.240800   72370 main.go:141] libmachine: (flannel-256954) Calling .GetConfigRaw
	I1202 13:13:36.241391   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:36.241571   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:36.241717   72370 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1202 13:13:36.241734   72370 main.go:141] libmachine: (flannel-256954) Calling .GetState
	I1202 13:13:36.243018   72370 main.go:141] libmachine: Detecting operating system of created instance...
	I1202 13:13:36.243030   72370 main.go:141] libmachine: Waiting for SSH to be available...
	I1202 13:13:36.243035   72370 main.go:141] libmachine: Getting to WaitForSSH function...
	I1202 13:13:36.243040   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.245445   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.245806   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.245834   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.245988   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:36.246119   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.246268   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.246391   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:36.246596   72370 main.go:141] libmachine: Using SSH client type: native
	I1202 13:13:36.246836   72370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I1202 13:13:36.246851   72370 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1202 13:13:36.347585   72370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
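The WaitForSSH step above simply runs "exit 0" over SSH until the guest answers. Below is a minimal Go sketch of that probe, reusing the host, user and key path from the log; the retry loop and its limits are illustrative, not libmachine's actual implementation.

// sshReady probes a freshly created VM by running "exit 0" over SSH,
// mirroring the WaitForSSH step in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // the VM answered; provisioning can continue
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became ready", host)
}

func main() {
	key := "/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa"
	if err := sshReady("192.168.50.188", key); err != nil {
		fmt.Println(err)
	}
}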
	I1202 13:13:36.347614   72370 main.go:141] libmachine: Detecting the provisioner...
	I1202 13:13:36.347624   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.350282   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.350682   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.350709   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.350866   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:36.351038   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.351187   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.351286   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:36.351449   72370 main.go:141] libmachine: Using SSH client type: native
	I1202 13:13:36.351644   72370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I1202 13:13:36.351656   72370 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1202 13:13:36.456675   72370 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1202 13:13:36.456736   72370 main.go:141] libmachine: found compatible host: buildroot
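The provisioner detection above works by reading /etc/os-release over SSH and matching the ID field ("buildroot" here). A small sketch of that parsing, with a hypothetical osReleaseID helper fed the output captured in the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID returns the ID= value from /etc/os-release content.
func osReleaseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(osReleaseID(out)) // "buildroot" -> "found compatible host" in the log
}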
	I1202 13:13:36.456745   72370 main.go:141] libmachine: Provisioning with buildroot...
	I1202 13:13:36.456755   72370 main.go:141] libmachine: (flannel-256954) Calling .GetMachineName
	I1202 13:13:36.456973   72370 buildroot.go:166] provisioning hostname "flannel-256954"
	I1202 13:13:36.457008   72370 main.go:141] libmachine: (flannel-256954) Calling .GetMachineName
	I1202 13:13:36.457180   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.459833   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.460143   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.460162   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.460362   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:36.460516   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.460624   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.460763   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:36.460896   72370 main.go:141] libmachine: Using SSH client type: native
	I1202 13:13:36.461107   72370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I1202 13:13:36.461124   72370 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-256954 && echo "flannel-256954" | sudo tee /etc/hostname
	I1202 13:13:36.579243   72370 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-256954
	
	I1202 13:13:36.579272   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.581735   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.582045   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.582072   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.582277   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:36.582429   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.582585   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.582708   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:36.582854   72370 main.go:141] libmachine: Using SSH client type: native
	I1202 13:13:36.583056   72370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I1202 13:13:36.583075   72370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-256954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-256954/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-256954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 13:13:36.696760   72370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 13:13:36.696791   72370 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
	I1202 13:13:36.696826   72370 buildroot.go:174] setting up certificates
	I1202 13:13:36.696836   72370 provision.go:84] configureAuth start
	I1202 13:13:36.696845   72370 main.go:141] libmachine: (flannel-256954) Calling .GetMachineName
	I1202 13:13:36.697129   72370 main.go:141] libmachine: (flannel-256954) Calling .GetIP
	I1202 13:13:36.699843   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.700174   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.700198   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.700418   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.702438   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.702734   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.702760   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.702868   72370 provision.go:143] copyHostCerts
	I1202 13:13:36.702936   72370 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem, removing ...
	I1202 13:13:36.702947   72370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem
	I1202 13:13:36.703031   72370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
	I1202 13:13:36.703175   72370 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem, removing ...
	I1202 13:13:36.703186   72370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem
	I1202 13:13:36.703228   72370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
	I1202 13:13:36.703340   72370 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem, removing ...
	I1202 13:13:36.703351   72370 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem
	I1202 13:13:36.703385   72370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
	I1202 13:13:36.703453   72370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.flannel-256954 san=[127.0.0.1 192.168.50.188 flannel-256954 localhost minikube]
	I1202 13:13:36.767587   72370 provision.go:177] copyRemoteCerts
	I1202 13:13:36.767657   72370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 13:13:36.767679   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.770143   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.770463   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.770486   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.770638   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:36.770804   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.770969   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:36.771127   72370 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa Username:docker}
	I1202 13:13:36.854209   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 13:13:36.879165   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1202 13:13:36.907011   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
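The "generating server cert" line above lists the SANs the certificate must cover (127.0.0.1, 192.168.50.188, flannel-256954, localhost, minikube). A short Go sketch producing a certificate with those SANs; to stay compact it self-signs and skips error handling, whereas the real flow signs with the minikube CA key before copying the files into /etc/docker as shown above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-256954"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"flannel-256954", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.188")},
	}
	// Self-signed for brevity: template doubles as parent.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}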
	I1202 13:13:36.929913   72370 provision.go:87] duration metric: took 233.066847ms to configureAuth
	I1202 13:13:36.929936   72370 buildroot.go:189] setting minikube options for container-runtime
	I1202 13:13:36.930074   72370 config.go:182] Loaded profile config "flannel-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 13:13:36.930143   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:36.932736   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.933055   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:36.933075   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:36.933277   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:36.933465   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.933631   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:36.933759   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:36.933900   72370 main.go:141] libmachine: Using SSH client type: native
	I1202 13:13:36.934094   72370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I1202 13:13:36.934113   72370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 13:13:37.157782   72370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 13:13:37.157812   72370 main.go:141] libmachine: Checking connection to Docker...
	I1202 13:13:37.157822   72370 main.go:141] libmachine: (flannel-256954) Calling .GetURL
	I1202 13:13:37.159061   72370 main.go:141] libmachine: (flannel-256954) DBG | Using libvirt version 6000000
	I1202 13:13:37.161020   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.161419   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.161449   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.161580   72370 main.go:141] libmachine: Docker is up and running!
	I1202 13:13:37.161595   72370 main.go:141] libmachine: Reticulating splines...
	I1202 13:13:37.161603   72370 client.go:171] duration metric: took 27.254759882s to LocalClient.Create
	I1202 13:13:37.161641   72370 start.go:167] duration metric: took 27.254817061s to libmachine.API.Create "flannel-256954"
	I1202 13:13:37.161653   72370 start.go:293] postStartSetup for "flannel-256954" (driver="kvm2")
	I1202 13:13:37.161668   72370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 13:13:37.161688   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:37.161893   72370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 13:13:37.161917   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:37.164034   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.164349   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.164377   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.164502   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:37.164674   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:37.164822   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:37.164940   72370 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa Username:docker}
	I1202 13:13:37.246650   72370 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 13:13:37.251013   72370 info.go:137] Remote host: Buildroot 2023.02.9
	I1202 13:13:37.251032   72370 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
	I1202 13:13:37.251086   72370 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
	I1202 13:13:37.251180   72370 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem -> 134162.pem in /etc/ssl/certs
	I1202 13:13:37.251289   72370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 13:13:37.260969   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /etc/ssl/certs/134162.pem (1708 bytes)
	I1202 13:13:37.288289   72370 start.go:296] duration metric: took 126.62303ms for postStartSetup
	I1202 13:13:37.288344   72370 main.go:141] libmachine: (flannel-256954) Calling .GetConfigRaw
	I1202 13:13:37.289023   72370 main.go:141] libmachine: (flannel-256954) Calling .GetIP
	I1202 13:13:37.291125   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.291453   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.291471   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.291708   72370 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/config.json ...
	I1202 13:13:37.291909   72370 start.go:128] duration metric: took 27.402462954s to createHost
	I1202 13:13:37.291944   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:37.294058   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.294336   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.294361   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.294490   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:37.294682   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:37.294809   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:37.294959   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:37.295092   72370 main.go:141] libmachine: Using SSH client type: native
	I1202 13:13:37.295283   72370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.188 22 <nil> <nil>}
	I1202 13:13:37.295294   72370 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1202 13:13:37.400853   72370 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733145217.354176693
	
	I1202 13:13:37.400875   72370 fix.go:216] guest clock: 1733145217.354176693
	I1202 13:13:37.400882   72370 fix.go:229] Guest: 2024-12-02 13:13:37.354176693 +0000 UTC Remote: 2024-12-02 13:13:37.291924992 +0000 UTC m=+27.509218759 (delta=62.251701ms)
	I1202 13:13:37.400899   72370 fix.go:200] guest clock delta is within tolerance: 62.251701ms
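The guest-clock check above parses the output of "date +%s.%N", compares it with the host clock, and accepts the machine when the delta is inside a tolerance. A tiny sketch of that comparison using the values from the log; the one-second tolerance here is an assumption for illustration only.

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by at most tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	return math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	guest := time.Unix(1733145217, 354176693) // parsed from "1733145217.354176693"
	host := guest.Add(-62251701 * time.Nanosecond)
	fmt.Println(withinTolerance(guest, host, time.Second)) // true -> no resync needed
}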
	I1202 13:13:37.400913   72370 start.go:83] releasing machines lock for "flannel-256954", held for 27.511525991s
	I1202 13:13:37.400933   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:37.401143   72370 main.go:141] libmachine: (flannel-256954) Calling .GetIP
	I1202 13:13:37.403879   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.404294   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.404322   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.404425   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:37.404855   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:37.405040   72370 main.go:141] libmachine: (flannel-256954) Calling .DriverName
	I1202 13:13:37.405145   72370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 13:13:37.405182   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:37.405233   72370 ssh_runner.go:195] Run: cat /version.json
	I1202 13:13:37.405251   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHHostname
	I1202 13:13:37.407831   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.408038   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.408300   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.408346   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.408471   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:37.408564   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:37.408594   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:37.408615   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:37.408762   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:37.408799   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHPort
	I1202 13:13:37.408913   72370 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa Username:docker}
	I1202 13:13:37.409003   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHKeyPath
	I1202 13:13:37.409117   72370 main.go:141] libmachine: (flannel-256954) Calling .GetSSHUsername
	I1202 13:13:37.409239   72370 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa Username:docker}
	I1202 13:13:37.518020   72370 ssh_runner.go:195] Run: systemctl --version
	I1202 13:13:37.524297   72370 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 13:13:37.691562   72370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 13:13:37.698047   72370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 13:13:37.698101   72370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 13:13:37.716277   72370 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 13:13:37.716300   72370 start.go:495] detecting cgroup driver to use...
	I1202 13:13:37.716352   72370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 13:13:37.735544   72370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 13:13:37.752782   72370 docker.go:217] disabling cri-docker service (if available) ...
	I1202 13:13:37.752837   72370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 13:13:37.768779   72370 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 13:13:37.782881   72370 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 13:13:37.896744   72370 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 13:13:38.053710   72370 docker.go:233] disabling docker service ...
	I1202 13:13:38.053796   72370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 13:13:38.072154   72370 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 13:13:38.085621   72370 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 13:13:38.211795   72370 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 13:13:38.331501   72370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 13:13:38.350772   72370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 13:13:38.372659   72370 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 13:13:38.372712   72370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.383106   72370 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 13:13:38.383155   72370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.394944   72370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.405176   72370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.416129   72370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 13:13:38.427193   72370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.437291   72370 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.453945   72370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 13:13:38.464021   72370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 13:13:38.473242   72370 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 13:13:38.473290   72370 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 13:13:38.486168   72370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 13:13:38.496836   72370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:13:38.636511   72370 ssh_runner.go:195] Run: sudo systemctl restart crio
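The CRI-O configuration pass above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a daemon-reload and a restart of crio. A condensed sketch that replays the same remote commands; runRemote is a hypothetical stand-in for minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// runRemote executes a shell command on the guest over SSH.
func runRemote(host, cmd string) error {
	return exec.Command("ssh", "docker@"+host, cmd).Run()
}

func main() {
	host := "192.168.50.188"
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := runRemote(host, s); err != nil {
			fmt.Println("step failed:", s, err)
			return
		}
	}
}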
	I1202 13:13:38.753540   72370 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 13:13:38.753615   72370 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 13:13:38.759963   72370 start.go:563] Will wait 60s for crictl version
	I1202 13:13:38.760017   72370 ssh_runner.go:195] Run: which crictl
	I1202 13:13:38.764390   72370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 13:13:38.809495   72370 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 13:13:38.809565   72370 ssh_runner.go:195] Run: crio --version
	I1202 13:13:38.842336   72370 ssh_runner.go:195] Run: crio --version
	I1202 13:13:38.876781   72370 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1202 13:13:38.156257   70055 api_server.go:279] https://192.168.72.54:8443/healthz returned 200:
	ok
	I1202 13:13:38.157683   70055 api_server.go:141] control plane version: v1.31.2
	I1202 13:13:38.157713   70055 api_server.go:131] duration metric: took 6.359565ms to wait for apiserver health ...
	I1202 13:13:38.157722   70055 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 13:13:38.337217   70055 system_pods.go:59] 7 kube-system pods found
	I1202 13:13:38.337242   70055 system_pods.go:61] "coredns-7c65d6cfc9-zwxjv" [8644a348-3d4d-404c-870f-acd88f1169ae] Running
	I1202 13:13:38.337247   70055 system_pods.go:61] "etcd-enable-default-cni-256954" [c9fb9e60-a790-4811-99f9-0d3641c02294] Running
	I1202 13:13:38.337252   70055 system_pods.go:61] "kube-apiserver-enable-default-cni-256954" [5b52ac58-0033-495a-a4e7-66fd70506d61] Running
	I1202 13:13:38.337256   70055 system_pods.go:61] "kube-controller-manager-enable-default-cni-256954" [38065260-8155-4381-ae3d-a54381d301a8] Running
	I1202 13:13:38.337259   70055 system_pods.go:61] "kube-proxy-v55kc" [762cfb98-597b-4db6-806a-915b20dbac6d] Running
	I1202 13:13:38.337262   70055 system_pods.go:61] "kube-scheduler-enable-default-cni-256954" [692919fd-0d66-4091-adf7-e4808515184c] Running
	I1202 13:13:38.337265   70055 system_pods.go:61] "storage-provisioner" [a152578c-ed2c-4a76-83a8-b42ccdf87e5c] Running
	I1202 13:13:38.337270   70055 system_pods.go:74] duration metric: took 179.543093ms to wait for pod list to return data ...
	I1202 13:13:38.337278   70055 default_sa.go:34] waiting for default service account to be created ...
	I1202 13:13:38.534206   70055 default_sa.go:45] found service account: "default"
	I1202 13:13:38.534234   70055 default_sa.go:55] duration metric: took 196.948806ms for default service account to be created ...
	I1202 13:13:38.534244   70055 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 13:13:38.737642   70055 system_pods.go:86] 7 kube-system pods found
	I1202 13:13:38.737686   70055 system_pods.go:89] "coredns-7c65d6cfc9-zwxjv" [8644a348-3d4d-404c-870f-acd88f1169ae] Running
	I1202 13:13:38.737692   70055 system_pods.go:89] "etcd-enable-default-cni-256954" [c9fb9e60-a790-4811-99f9-0d3641c02294] Running
	I1202 13:13:38.737696   70055 system_pods.go:89] "kube-apiserver-enable-default-cni-256954" [5b52ac58-0033-495a-a4e7-66fd70506d61] Running
	I1202 13:13:38.737701   70055 system_pods.go:89] "kube-controller-manager-enable-default-cni-256954" [38065260-8155-4381-ae3d-a54381d301a8] Running
	I1202 13:13:38.737705   70055 system_pods.go:89] "kube-proxy-v55kc" [762cfb98-597b-4db6-806a-915b20dbac6d] Running
	I1202 13:13:38.737709   70055 system_pods.go:89] "kube-scheduler-enable-default-cni-256954" [692919fd-0d66-4091-adf7-e4808515184c] Running
	I1202 13:13:38.737712   70055 system_pods.go:89] "storage-provisioner" [a152578c-ed2c-4a76-83a8-b42ccdf87e5c] Running
	I1202 13:13:38.737719   70055 system_pods.go:126] duration metric: took 203.46946ms to wait for k8s-apps to be running ...
	I1202 13:13:38.737728   70055 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 13:13:38.737780   70055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 13:13:38.758795   70055 system_svc.go:56] duration metric: took 21.059632ms WaitForService to wait for kubelet
	I1202 13:13:38.758823   70055 kubeadm.go:582] duration metric: took 39.362906888s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 13:13:38.758844   70055 node_conditions.go:102] verifying NodePressure condition ...
	I1202 13:13:38.934523   70055 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 13:13:38.934553   70055 node_conditions.go:123] node cpu capacity is 2
	I1202 13:13:38.934567   70055 node_conditions.go:105] duration metric: took 175.716907ms to run NodePressure ...
	I1202 13:13:38.934582   70055 start.go:241] waiting for startup goroutines ...
	I1202 13:13:38.934591   70055 start.go:246] waiting for cluster config update ...
	I1202 13:13:38.934604   70055 start.go:255] writing updated cluster config ...
	I1202 13:13:38.934945   70055 ssh_runner.go:195] Run: rm -f paused
	I1202 13:13:38.993700   70055 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 13:13:38.995444   70055 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-256954" cluster and "default" namespace by default
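The readiness checks that close the enable-default-cni-256954 run above (kube-system pods, default service account, kubelet) boil down to polling the API server until everything reports healthy. A sketch of the pod check with client-go, assuming the k8s.io/client-go module is available and using a hypothetical kubeconfig path; it only prints each pod's phase rather than implementing minikube's full wait logic.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// The log above expects every kube-system pod to be Running.
		fmt.Printf("%-55s %s\n", p.Name, p.Status.Phase)
	}
}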
	I1202 13:13:37.403092   73906 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1202 13:13:37.403306   73906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 13:13:37.403381   73906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 13:13:37.419480   73906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I1202 13:13:37.419939   73906 main.go:141] libmachine: () Calling .GetVersion
	I1202 13:13:37.420523   73906 main.go:141] libmachine: Using API Version  1
	I1202 13:13:37.420547   73906 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 13:13:37.420990   73906 main.go:141] libmachine: () Calling .GetMachineName
	I1202 13:13:37.421202   73906 main.go:141] libmachine: (bridge-256954) Calling .GetMachineName
	I1202 13:13:37.421371   73906 main.go:141] libmachine: (bridge-256954) Calling .DriverName
	I1202 13:13:37.421565   73906 start.go:159] libmachine.API.Create for "bridge-256954" (driver="kvm2")
	I1202 13:13:37.421595   73906 client.go:168] LocalClient.Create starting
	I1202 13:13:37.421634   73906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
	I1202 13:13:37.421675   73906 main.go:141] libmachine: Decoding PEM data...
	I1202 13:13:37.421696   73906 main.go:141] libmachine: Parsing certificate...
	I1202 13:13:37.421786   73906 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
	I1202 13:13:37.421820   73906 main.go:141] libmachine: Decoding PEM data...
	I1202 13:13:37.421839   73906 main.go:141] libmachine: Parsing certificate...
	I1202 13:13:37.421864   73906 main.go:141] libmachine: Running pre-create checks...
	I1202 13:13:37.421876   73906 main.go:141] libmachine: (bridge-256954) Calling .PreCreateCheck
	I1202 13:13:37.422241   73906 main.go:141] libmachine: (bridge-256954) Calling .GetConfigRaw
	I1202 13:13:37.422714   73906 main.go:141] libmachine: Creating machine...
	I1202 13:13:37.422730   73906 main.go:141] libmachine: (bridge-256954) Calling .Create
	I1202 13:13:37.422890   73906 main.go:141] libmachine: (bridge-256954) Creating KVM machine...
	I1202 13:13:37.423981   73906 main.go:141] libmachine: (bridge-256954) DBG | found existing default KVM network
	I1202 13:13:37.425150   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.424998   73994 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:4f:fe} reservation:<nil>}
	I1202 13:13:37.426003   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.425943   73994 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:37:20:ed} reservation:<nil>}
	I1202 13:13:37.427089   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.427021   73994 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000215d60}
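The free-subnet search above skips 192.168.39.0/24 and 192.168.50.0/24 because they are already in use and settles on 192.168.61.0/24. A simplified sketch of that idea, treating a subnet as taken when a host interface address falls inside it; the candidate list and the check are assumptions, not the exact network.go logic.

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address lies inside cidr.
func subnetTaken(cidr string) bool {
	_, wanted, _ := net.ParseCIDR(cidr)
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && wanted.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
		if !subnetTaken(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}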
	I1202 13:13:37.427153   73906 main.go:141] libmachine: (bridge-256954) DBG | created network xml: 
	I1202 13:13:37.427175   73906 main.go:141] libmachine: (bridge-256954) DBG | <network>
	I1202 13:13:37.427203   73906 main.go:141] libmachine: (bridge-256954) DBG |   <name>mk-bridge-256954</name>
	I1202 13:13:37.427222   73906 main.go:141] libmachine: (bridge-256954) DBG |   <dns enable='no'/>
	I1202 13:13:37.427234   73906 main.go:141] libmachine: (bridge-256954) DBG |   
	I1202 13:13:37.427244   73906 main.go:141] libmachine: (bridge-256954) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1202 13:13:37.427256   73906 main.go:141] libmachine: (bridge-256954) DBG |     <dhcp>
	I1202 13:13:37.427269   73906 main.go:141] libmachine: (bridge-256954) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1202 13:13:37.427280   73906 main.go:141] libmachine: (bridge-256954) DBG |     </dhcp>
	I1202 13:13:37.427290   73906 main.go:141] libmachine: (bridge-256954) DBG |   </ip>
	I1202 13:13:37.427297   73906 main.go:141] libmachine: (bridge-256954) DBG |   
	I1202 13:13:37.427306   73906 main.go:141] libmachine: (bridge-256954) DBG | </network>
	I1202 13:13:37.427315   73906 main.go:141] libmachine: (bridge-256954) DBG | 
	I1202 13:13:37.432413   73906 main.go:141] libmachine: (bridge-256954) DBG | trying to create private KVM network mk-bridge-256954 192.168.61.0/24...
	I1202 13:13:37.505179   73906 main.go:141] libmachine: (bridge-256954) DBG | private KVM network mk-bridge-256954 192.168.61.0/24 created
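The XML above is then turned into a live libvirt network. The sketch below does the equivalent with the virsh CLI so the example needs no extra dependencies; the kvm2 driver itself talks to libvirt through its API rather than shelling out.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-bridge-256954</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	f.WriteString(networkXML)
	f.Close()
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-bridge-256954"},
		{"net-autostart", "mk-bridge-256954"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			fmt.Println("virsh", args[0], "failed:", err, string(out))
			return
		}
	}
}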
	I1202 13:13:37.505213   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.505147   73994 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:13:37.505228   73906 main.go:141] libmachine: (bridge-256954) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954 ...
	I1202 13:13:37.505246   73906 main.go:141] libmachine: (bridge-256954) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 13:13:37.505270   73906 main.go:141] libmachine: (bridge-256954) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1202 13:13:37.758266   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.758159   73994 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954/id_rsa...
	I1202 13:13:37.831914   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.831804   73994 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954/bridge-256954.rawdisk...
	I1202 13:13:37.831936   73906 main.go:141] libmachine: (bridge-256954) DBG | Writing magic tar header
	I1202 13:13:37.831945   73906 main.go:141] libmachine: (bridge-256954) DBG | Writing SSH key tar header
	I1202 13:13:37.832075   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:37.831939   73994 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954 ...
	I1202 13:13:37.832125   73906 main.go:141] libmachine: (bridge-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954 (perms=drwx------)
	I1202 13:13:37.832141   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954
	I1202 13:13:37.832161   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
	I1202 13:13:37.832178   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 13:13:37.832191   73906 main.go:141] libmachine: (bridge-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
	I1202 13:13:37.832207   73906 main.go:141] libmachine: (bridge-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
	I1202 13:13:37.832217   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
	I1202 13:13:37.832249   73906 main.go:141] libmachine: (bridge-256954) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
	I1202 13:13:37.832264   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1202 13:13:37.832273   73906 main.go:141] libmachine: (bridge-256954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 13:13:37.832290   73906 main.go:141] libmachine: (bridge-256954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 13:13:37.832300   73906 main.go:141] libmachine: (bridge-256954) Creating domain...
	I1202 13:13:37.832312   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home/jenkins
	I1202 13:13:37.832321   73906 main.go:141] libmachine: (bridge-256954) DBG | Checking permissions on dir: /home
	I1202 13:13:37.832331   73906 main.go:141] libmachine: (bridge-256954) DBG | Skipping /home - not owner
	I1202 13:13:37.833569   73906 main.go:141] libmachine: (bridge-256954) define libvirt domain using xml: 
	I1202 13:13:37.833589   73906 main.go:141] libmachine: (bridge-256954) <domain type='kvm'>
	I1202 13:13:37.833598   73906 main.go:141] libmachine: (bridge-256954)   <name>bridge-256954</name>
	I1202 13:13:37.833606   73906 main.go:141] libmachine: (bridge-256954)   <memory unit='MiB'>3072</memory>
	I1202 13:13:37.833614   73906 main.go:141] libmachine: (bridge-256954)   <vcpu>2</vcpu>
	I1202 13:13:37.833620   73906 main.go:141] libmachine: (bridge-256954)   <features>
	I1202 13:13:37.833626   73906 main.go:141] libmachine: (bridge-256954)     <acpi/>
	I1202 13:13:37.833632   73906 main.go:141] libmachine: (bridge-256954)     <apic/>
	I1202 13:13:37.833640   73906 main.go:141] libmachine: (bridge-256954)     <pae/>
	I1202 13:13:37.833645   73906 main.go:141] libmachine: (bridge-256954)     
	I1202 13:13:37.833652   73906 main.go:141] libmachine: (bridge-256954)   </features>
	I1202 13:13:37.833658   73906 main.go:141] libmachine: (bridge-256954)   <cpu mode='host-passthrough'>
	I1202 13:13:37.833664   73906 main.go:141] libmachine: (bridge-256954)   
	I1202 13:13:37.833669   73906 main.go:141] libmachine: (bridge-256954)   </cpu>
	I1202 13:13:37.833676   73906 main.go:141] libmachine: (bridge-256954)   <os>
	I1202 13:13:37.833687   73906 main.go:141] libmachine: (bridge-256954)     <type>hvm</type>
	I1202 13:13:37.833695   73906 main.go:141] libmachine: (bridge-256954)     <boot dev='cdrom'/>
	I1202 13:13:37.833713   73906 main.go:141] libmachine: (bridge-256954)     <boot dev='hd'/>
	I1202 13:13:37.833722   73906 main.go:141] libmachine: (bridge-256954)     <bootmenu enable='no'/>
	I1202 13:13:37.833727   73906 main.go:141] libmachine: (bridge-256954)   </os>
	I1202 13:13:37.833735   73906 main.go:141] libmachine: (bridge-256954)   <devices>
	I1202 13:13:37.833743   73906 main.go:141] libmachine: (bridge-256954)     <disk type='file' device='cdrom'>
	I1202 13:13:37.833754   73906 main.go:141] libmachine: (bridge-256954)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954/boot2docker.iso'/>
	I1202 13:13:37.833760   73906 main.go:141] libmachine: (bridge-256954)       <target dev='hdc' bus='scsi'/>
	I1202 13:13:37.833783   73906 main.go:141] libmachine: (bridge-256954)       <readonly/>
	I1202 13:13:37.833800   73906 main.go:141] libmachine: (bridge-256954)     </disk>
	I1202 13:13:37.833828   73906 main.go:141] libmachine: (bridge-256954)     <disk type='file' device='disk'>
	I1202 13:13:37.833861   73906 main.go:141] libmachine: (bridge-256954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1202 13:13:37.833878   73906 main.go:141] libmachine: (bridge-256954)       <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/bridge-256954/bridge-256954.rawdisk'/>
	I1202 13:13:37.833886   73906 main.go:141] libmachine: (bridge-256954)       <target dev='hda' bus='virtio'/>
	I1202 13:13:37.833897   73906 main.go:141] libmachine: (bridge-256954)     </disk>
	I1202 13:13:37.833904   73906 main.go:141] libmachine: (bridge-256954)     <interface type='network'>
	I1202 13:13:37.833913   73906 main.go:141] libmachine: (bridge-256954)       <source network='mk-bridge-256954'/>
	I1202 13:13:37.833920   73906 main.go:141] libmachine: (bridge-256954)       <model type='virtio'/>
	I1202 13:13:37.833933   73906 main.go:141] libmachine: (bridge-256954)     </interface>
	I1202 13:13:37.833945   73906 main.go:141] libmachine: (bridge-256954)     <interface type='network'>
	I1202 13:13:37.833954   73906 main.go:141] libmachine: (bridge-256954)       <source network='default'/>
	I1202 13:13:37.833961   73906 main.go:141] libmachine: (bridge-256954)       <model type='virtio'/>
	I1202 13:13:37.833968   73906 main.go:141] libmachine: (bridge-256954)     </interface>
	I1202 13:13:37.833975   73906 main.go:141] libmachine: (bridge-256954)     <serial type='pty'>
	I1202 13:13:37.833982   73906 main.go:141] libmachine: (bridge-256954)       <target port='0'/>
	I1202 13:13:37.833988   73906 main.go:141] libmachine: (bridge-256954)     </serial>
	I1202 13:13:37.833998   73906 main.go:141] libmachine: (bridge-256954)     <console type='pty'>
	I1202 13:13:37.834005   73906 main.go:141] libmachine: (bridge-256954)       <target type='serial' port='0'/>
	I1202 13:13:37.834019   73906 main.go:141] libmachine: (bridge-256954)     </console>
	I1202 13:13:37.834028   73906 main.go:141] libmachine: (bridge-256954)     <rng model='virtio'>
	I1202 13:13:37.834038   73906 main.go:141] libmachine: (bridge-256954)       <backend model='random'>/dev/random</backend>
	I1202 13:13:37.834043   73906 main.go:141] libmachine: (bridge-256954)     </rng>
	I1202 13:13:37.834048   73906 main.go:141] libmachine: (bridge-256954)     
	I1202 13:13:37.834053   73906 main.go:141] libmachine: (bridge-256954)     
	I1202 13:13:37.834061   73906 main.go:141] libmachine: (bridge-256954)   </devices>
	I1202 13:13:37.834068   73906 main.go:141] libmachine: (bridge-256954) </domain>
	I1202 13:13:37.834077   73906 main.go:141] libmachine: (bridge-256954) 
	I1202 13:13:37.839347   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:a2:76:4b in network default
	I1202 13:13:37.839953   73906 main.go:141] libmachine: (bridge-256954) Ensuring networks are active...
	I1202 13:13:37.839970   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:37.840645   73906 main.go:141] libmachine: (bridge-256954) Ensuring network default is active
	I1202 13:13:37.841033   73906 main.go:141] libmachine: (bridge-256954) Ensuring network mk-bridge-256954 is active
	I1202 13:13:37.841577   73906 main.go:141] libmachine: (bridge-256954) Getting domain xml...
	I1202 13:13:37.842380   73906 main.go:141] libmachine: (bridge-256954) Creating domain...
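The "Ensuring networks are active" lines above make sure both the default network and mk-bridge-256954 are running before the domain is created. A sketch of that check, again via the virsh CLI for brevity; the parsing of net-list output is a simplification.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureActive starts the named libvirt network if it is not already active.
func ensureActive(name string) error {
	out, err := exec.Command("virsh", "net-list", "--all").Output()
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(out), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == name && fields[1] == "active" {
			return nil // already running
		}
	}
	return exec.Command("virsh", "net-start", name).Run()
}

func main() {
	for _, n := range []string{"default", "mk-bridge-256954"} {
		if err := ensureActive(n); err != nil {
			fmt.Println("could not activate network", n, ":", err)
		}
	}
}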
	I1202 13:13:38.877936   72370 main.go:141] libmachine: (flannel-256954) Calling .GetIP
	I1202 13:13:38.881534   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:38.881962   72370 main.go:141] libmachine: (flannel-256954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:93:9d", ip: ""} in network mk-flannel-256954: {Iface:virbr2 ExpiryTime:2024-12-02 14:13:26 +0000 UTC Type:0 Mac:52:54:00:de:93:9d Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:flannel-256954 Clientid:01:52:54:00:de:93:9d}
	I1202 13:13:38.881991   72370 main.go:141] libmachine: (flannel-256954) DBG | domain flannel-256954 has defined IP address 192.168.50.188 and MAC address 52:54:00:de:93:9d in network mk-flannel-256954
	I1202 13:13:38.882237   72370 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1202 13:13:38.887038   72370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 13:13:38.901112   72370 kubeadm.go:883] updating cluster {Name:flannel-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.188 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 13:13:38.901207   72370 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 13:13:38.901245   72370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 13:13:38.937445   72370 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1202 13:13:38.937503   72370 ssh_runner.go:195] Run: which lz4
	I1202 13:13:38.942764   72370 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 13:13:38.947182   72370 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 13:13:38.947209   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
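The preload handling above first stats /preloaded.tar.lz4 on the guest and, when that fails, copies the cached tarball over. A sketch of the same check-then-copy using ssh and scp with the paths from the log; in minikube this goes through ssh_runner rather than the plain CLI tools.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "docker@192.168.50.188"
	key := "/home/jenkins/minikube-integration/20033-6257/.minikube/machines/flannel-256954/id_rsa"
	local := "/home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"

	// Existence check on the guest; a non-zero exit means the file is missing.
	if err := exec.Command("ssh", "-i", key, host, "stat /preloaded.tar.lz4").Run(); err == nil {
		fmt.Println("preload already present on the guest")
		return
	}
	// Copy the cached tarball to the guest, as the scp line in the log does.
	if err := exec.Command("scp", "-i", key, local, host+":/preloaded.tar.lz4").Run(); err != nil {
		fmt.Println("copying preload failed:", err)
	}
}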
	I1202 13:13:39.236649   73906 main.go:141] libmachine: (bridge-256954) Waiting to get IP...
	I1202 13:13:39.237600   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:39.238021   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:39.238080   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:39.238012   73994 retry.go:31] will retry after 204.572181ms: waiting for machine to come up
	I1202 13:13:39.446127   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:39.452444   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:39.452474   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:39.452385   73994 retry.go:31] will retry after 255.034855ms: waiting for machine to come up
	I1202 13:13:39.708753   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:39.709289   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:39.709330   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:39.709225   73994 retry.go:31] will retry after 399.680876ms: waiting for machine to come up
	I1202 13:13:40.110962   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:40.111556   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:40.111587   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:40.111524   73994 retry.go:31] will retry after 440.449263ms: waiting for machine to come up
	I1202 13:13:40.553049   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:40.553526   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:40.553554   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:40.553473   73994 retry.go:31] will retry after 484.904618ms: waiting for machine to come up
	I1202 13:13:41.040218   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:41.040752   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:41.040780   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:41.040705   73994 retry.go:31] will retry after 704.35514ms: waiting for machine to come up
	I1202 13:13:41.746505   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:41.747071   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:41.747098   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:41.746981   73994 retry.go:31] will retry after 1.171563878s: waiting for machine to come up
	I1202 13:13:42.920635   73906 main.go:141] libmachine: (bridge-256954) DBG | domain bridge-256954 has defined MAC address 52:54:00:41:1a:fb in network mk-bridge-256954
	I1202 13:13:42.921236   73906 main.go:141] libmachine: (bridge-256954) DBG | unable to find current IP address of domain bridge-256954 in network mk-bridge-256954
	I1202 13:13:42.921260   73906 main.go:141] libmachine: (bridge-256954) DBG | I1202 13:13:42.921186   73994 retry.go:31] will retry after 1.405092222s: waiting for machine to come up
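	The interleaved bridge-256954 lines show libmachine polling for the domain's DHCP lease with a growing, jittered delay (the retry.go "will retry after ..." messages). A rough sketch of that polling pattern, with a hypothetical lookup callback standing in for the libvirt lease query:

	package sketch

	import (
		"errors"
		"time"
	)

	// waitForIP polls lookup until it returns a non-empty IP or the timeout
	// expires, growing the delay between attempts much like the retries logged
	// above. lookup is a hypothetical stand-in for reading libvirt DHCP leases.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay += delay / 2 // back off roughly 1.5x per attempt
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}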
	I1202 13:13:40.458660   72370 crio.go:462] duration metric: took 1.515925575s to copy over tarball
	I1202 13:13:40.458737   72370 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 13:13:42.975625   72370 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.516833705s)
	I1202 13:13:42.975660   72370 crio.go:469] duration metric: took 2.516971522s to extract the tarball
	I1202 13:13:42.975670   72370 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 13:13:43.016380   72370 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 13:13:43.071615   72370 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 13:13:43.071643   72370 cache_images.go:84] Images are preloaded, skipping loading
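	The preload decision above hinges on whether the CRI image store already contains the pinned control-plane images: the first `crictl images` check misses kube-apiserver:v1.31.2, so the lz4 tarball is copied and extracted into /var, after which the second check reports all images preloaded. A sketch of that check, parsing `crictl images --output json` (field names follow the CRI ListImagesResponse; an approximation of the idea, not minikube's crio.go):

	package sketch

	import (
		"encoding/json"
		"os/exec"
	)

	// hasImage reports whether the CRI-O image store already holds the given tag,
	// based on `sudo crictl images --output json`. A missing tag means the
	// preloaded tarball still needs to be copied over and extracted.
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var resp struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			return false, err
		}
		for _, img := range resp.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	For example, hasImage("registry.k8s.io/kube-apiserver:v1.31.2") would return false before the extraction above and true afterwards.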
	I1202 13:13:43.071652   72370 kubeadm.go:934] updating node { 192.168.50.188 8443 v1.31.2 crio true true} ...
	I1202 13:13:43.071788   72370 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-256954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:flannel-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
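	In the drop-in above, the bare `ExecStart=` line is the standard systemd idiom for clearing the base unit's command before the override on the following line takes effect; the kubelet flags are then filled in from the node config. A small text/template sketch of rendering such a drop-in (the template fields and flag set here are illustrative, not minikube's actual template):

	package sketch

	import (
		"io"
		"text/template"
	)

	// kubeletDropIn renders a systemd drop-in like the one logged above. The
	// empty ExecStart= resets the base unit before the overridden command is set.
	var kubeletDropIn = template.Must(template.New("kubelet").Parse(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

	[Install]
	`))

	func renderDropIn(w io.Writer, cfg struct{ KubernetesVersion, NodeName, NodeIP string }) error {
		return kubeletDropIn.Execute(w, cfg)
	}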
	I1202 13:13:43.071872   72370 ssh_runner.go:195] Run: crio config
	I1202 13:13:43.131467   72370 cni.go:84] Creating CNI manager for "flannel"
	I1202 13:13:43.131490   72370 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 13:13:43.131518   72370 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.188 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-256954 NodeName:flannel-256954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 13:13:43.131671   72370 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-256954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.188"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.188"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 13:13:43.131741   72370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 13:13:43.142417   72370 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 13:13:43.142519   72370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 13:13:43.153108   72370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1202 13:13:43.173179   72370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 13:13:43.194477   72370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
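	The 2294-byte kubeadm.yaml.new written above is a four-document YAML stream: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration, separated by `---`. One quick way to confirm that layout without a YAML dependency is to split on the separators and pull out each kind line; a sketch:

	package sketch

	import "strings"

	// documentKinds splits a multi-document YAML stream on "---" separators and
	// returns the kind declared in each document. String handling only, just to
	// show the four-part layout; kubeadm itself does real YAML decoding.
	func documentKinds(doc string) []string {
		var kinds []string
		for _, part := range strings.Split(doc, "\n---\n") {
			for _, line := range strings.Split(part, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:")))
					break
				}
			}
		}
		return kinds
	}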
	I1202 13:13:43.215365   72370 ssh_runner.go:195] Run: grep 192.168.50.188	control-plane.minikube.internal$ /etc/hosts
	I1202 13:13:43.220644   72370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 13:13:43.236922   72370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 13:13:43.395865   72370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 13:13:43.419462   72370 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954 for IP: 192.168.50.188
	I1202 13:13:43.419483   72370 certs.go:194] generating shared ca certs ...
	I1202 13:13:43.419503   72370 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:43.419697   72370 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
	I1202 13:13:43.419755   72370 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
	I1202 13:13:43.419768   72370 certs.go:256] generating profile certs ...
	I1202 13:13:43.419837   72370 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/client.key
	I1202 13:13:43.419866   72370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/client.crt with IP's: []
	I1202 13:13:43.503348   72370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/client.crt ...
	I1202 13:13:43.503378   72370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/client.crt: {Name:mkd8cef3137fd20626e082700d292fd6b43479b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:43.503555   72370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/client.key ...
	I1202 13:13:43.503568   72370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/client.key: {Name:mk1111cc8d2344042daecda7dbda60b9990b114c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:43.504203   72370 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.key.a806f775
	I1202 13:13:43.504248   72370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.crt.a806f775 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.188]
	I1202 13:13:43.597093   72370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.crt.a806f775 ...
	I1202 13:13:43.597118   72370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.crt.a806f775: {Name:mk3cd0f713534363218e11bcadcf02ecbe6a3deb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:43.597319   72370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.key.a806f775 ...
	I1202 13:13:43.597336   72370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.key.a806f775: {Name:mk2fa605d54fac6bb5764965c8e4534b69a39304 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:43.597449   72370 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.crt.a806f775 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.crt
	I1202 13:13:43.597557   72370 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.key.a806f775 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.key
	I1202 13:13:43.597613   72370 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.key
	I1202 13:13:43.597628   72370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.crt with IP's: []
	I1202 13:13:43.748921   72370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.crt ...
	I1202 13:13:43.748953   72370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.crt: {Name:mk9446dfe76b5144d2bf4b616bff96aa27f90616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 13:13:43.749144   72370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.key ...
	I1202 13:13:43.749162   72370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.key: {Name:mk5144fa044c447d86c6a0024f2656f880d11ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
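	The certs.go/crypto.go steps above generate the per-profile certificates: a client cert for "minikube-user", an apiserver serving cert whose SANs include 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.50.188, and a proxy-client (aggregator) cert, each signed by the cached minikubeCA. A compact crypto/x509 sketch of issuing such a CA-signed serving cert (an illustration of the idea, not minikube's crypto.go):

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a serving certificate for the given IP SANs, signed
	// by an existing CA, echoing the "generating signed profile cert" steps
	// above. Returns the PEM-encoded cert and its private key.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.188
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}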
	I1202 13:13:43.749405   72370 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem (1338 bytes)
	W1202 13:13:43.749456   72370 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416_empty.pem, impossibly tiny 0 bytes
	I1202 13:13:43.749470   72370 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 13:13:43.749504   72370 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
	I1202 13:13:43.749538   72370 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
	I1202 13:13:43.749568   72370 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
	I1202 13:13:43.749620   72370 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem (1708 bytes)
	I1202 13:13:43.750413   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 13:13:43.786997   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 13:13:43.814309   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 13:13:43.841565   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 13:13:43.869294   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 13:13:43.898945   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 13:13:43.926401   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 13:13:43.953488   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/flannel-256954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 13:13:43.980553   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/ssl/certs/134162.pem --> /usr/share/ca-certificates/134162.pem (1708 bytes)
	I1202 13:13:44.005053   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 13:13:44.031672   72370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/13416.pem --> /usr/share/ca-certificates/13416.pem (1338 bytes)
	I1202 13:13:44.058242   72370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 13:13:44.076918   72370 ssh_runner.go:195] Run: openssl version
	I1202 13:13:44.082801   72370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13416.pem && ln -fs /usr/share/ca-certificates/13416.pem /etc/ssl/certs/13416.pem"
	I1202 13:13:44.094084   72370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13416.pem
	I1202 13:13:44.098923   72370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:42 /usr/share/ca-certificates/13416.pem
	I1202 13:13:44.098974   72370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13416.pem
	I1202 13:13:44.105158   72370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13416.pem /etc/ssl/certs/51391683.0"
	I1202 13:13:44.125782   72370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134162.pem && ln -fs /usr/share/ca-certificates/134162.pem /etc/ssl/certs/134162.pem"
	I1202 13:13:44.159674   72370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134162.pem
	I1202 13:13:44.166641   72370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:42 /usr/share/ca-certificates/134162.pem
	I1202 13:13:44.166697   72370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134162.pem
	I1202 13:13:44.175504   72370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/134162.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 13:13:44.187620   72370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 13:13:44.205051   72370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:13:44.211371   72370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:13:44.211427   72370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 13:13:44.217756   72370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
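	The repeated pattern above installs each CA bundle where OpenSSL-based clients can find it: copy the PEM under /usr/share/ca-certificates, take its subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 at it. A sketch of the same sequence run locally (minikube drives it over ssh_runner and sudo):

	package sketch

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert asks openssl for the certificate's subject hash and symlinks
	// /etc/ssl/certs/<hash>.0 to the PEM, so OpenSSL-style lookups resolve the CA.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}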
	I1202 13:13:44.232580   72370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 13:13:44.238270   72370 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 13:13:44.238329   72370 kubeadm.go:392] StartCluster: {Name:flannel-256954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:flannel-256954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.188 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 13:13:44.238429   72370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 13:13:44.238497   72370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 13:13:44.285680   72370 cri.go:89] found id: ""
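	The empty `found id: ""` from cri.go above comes from listing every container labelled with the kube-system namespace; no hits means there is no previous control plane to tear down before kubeadm init runs. The same query via crictl, as a sketch:

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers mirrors the cri.go listing above: ask crictl for
	// all container IDs carrying the kube-system namespace label. An empty slice,
	// as in the log, means no control-plane containers exist yet.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}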
	I1202 13:13:44.285750   72370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 13:13:44.299921   72370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 13:13:44.313371   72370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 13:13:44.326395   72370 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 13:13:44.326415   72370 kubeadm.go:157] found existing configuration files:
	
	I1202 13:13:44.326460   72370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 13:13:44.338960   72370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 13:13:44.339018   72370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 13:13:44.351906   72370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 13:13:44.361967   72370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 13:13:44.362022   72370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 13:13:44.372366   72370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 13:13:44.383356   72370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 13:13:44.383420   72370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 13:13:44.396400   72370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 13:13:44.408698   72370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 13:13:44.408766   72370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
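	The four grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes must reference https://control-plane.minikube.internal:8443, and any file that does not (or, as here, does not exist at all) is removed so kubeadm init can write a fresh one. A local sketch of that sweep (minikube performs it through ssh_runner on the guest):

	package sketch

	import (
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any kubeconfig that does not mention the
	// expected control-plane endpoint, mirroring the grep/rm sequence above.
	func cleanStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(p) // missing or stale: kubeadm init regenerates it
			}
		}
	}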
	I1202 13:13:44.422027   72370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 13:13:44.487345   72370 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 13:13:44.487431   72370 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 13:13:44.603866   72370 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 13:13:44.604013   72370 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 13:13:44.604128   72370 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 13:13:44.621641   72370 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 13:13:44.643579   72370 out.go:235]   - Generating certificates and keys ...
	I1202 13:13:44.643699   72370 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 13:13:44.643819   72370 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 13:13:44.729834   72370 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 13:13:44.902358   72370 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 13:13:44.986181   72370 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 13:13:45.110731   72370 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 13:13:45.416041   72370 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 13:13:45.416417   72370 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-256954 localhost] and IPs [192.168.50.188 127.0.0.1 ::1]
	I1202 13:13:45.512606   72370 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 13:13:45.512935   72370 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-256954 localhost] and IPs [192.168.50.188 127.0.0.1 ::1]
	I1202 13:13:45.597482   72370 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 13:13:45.744504   72370 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 13:13:45.851198   72370 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 13:13:45.851605   72370 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 13:13:45.960393   72370 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 13:13:46.160384   72370 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 13:13:46.298454   72370 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 13:13:46.675977   72370 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 13:13:46.812607   72370 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 13:13:46.813336   72370 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 13:13:46.815825   72370 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.260947103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145227260912584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9c681ac-6eef-4989-bae6-60a79562d1b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.261774935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29362ad5-79a7-4dac-acb5-1db92d62c9fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.261847973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29362ad5-79a7-4dac-acb5-1db92d62c9fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.262134583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29362ad5-79a7-4dac-acb5-1db92d62c9fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.303523775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=caaa4dc3-2c90-4149-b5b2-c3844903fab1 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.303623812Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=caaa4dc3-2c90-4149-b5b2-c3844903fab1 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.304533663Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=076dd3b3-a4a5-4b4a-a5e6-9a3c642ac6f4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.304959695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145227304936163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=076dd3b3-a4a5-4b4a-a5e6-9a3c642ac6f4 name=/runtime.v1.ImageService/ImageFsInfo
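	The ImageFsInfo responses above report usage for CRI-O's image store at /var/lib/containers/storage/overlay-images; the kubelet polls this RPC for its stats and image garbage-collection decisions. The same data can be pulled by hand through crictl, sketched here as a thin shell-out:

	package sketch

	import "os/exec"

	// imageFsInfo shells out to `crictl imagefsinfo`, which issues the same
	// ImageService/ImageFsInfo RPC seen in the debug log above and prints the
	// image store's mountpoint, used bytes and inodes. Sketch only.
	func imageFsInfo() ([]byte, error) {
		return exec.Command("sudo", "crictl", "imagefsinfo").Output()
	}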
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.305504493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ce9dfd3-7dc8-4acc-9acc-ccaa964b15a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.305581369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ce9dfd3-7dc8-4acc-9acc-ccaa964b15a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.305831954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ce9dfd3-7dc8-4acc-9acc-ccaa964b15a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.347031499Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba9d501d-681c-4c67-874f-0c65fcda0ec1 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.347169049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba9d501d-681c-4c67-874f-0c65fcda0ec1 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.348876588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64f3178f-3790-4282-8d73-c7cf7b66f9dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.349566244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145227349528506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64f3178f-3790-4282-8d73-c7cf7b66f9dc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.350397498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2616487-03ff-40cc-819f-73f1dc2dcd39 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.350452135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2616487-03ff-40cc-819f-73f1dc2dcd39 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.350649184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2616487-03ff-40cc-819f-73f1dc2dcd39 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.388372716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b8bf75e-6699-4dc8-bec3-1fc3bb6a0315 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.388464317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b8bf75e-6699-4dc8-bec3-1fc3bb6a0315 name=/runtime.v1.RuntimeService/Version
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.389549953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d83fe39f-0d42-49da-9f1a-e9dd9c6dc0ae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.390140326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145227390049193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d83fe39f-0d42-49da-9f1a-e9dd9c6dc0ae name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.390802046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2de7bac6-81f1-4fff-9a22-e7210d0c6781 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.390875888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2de7bac6-81f1-4fff-9a22-e7210d0c6781 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 13:13:47 default-k8s-diff-port-653783 crio[680]: time="2024-12-02 13:13:47.391209511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e,PodSandboxId:722462dbc74c301fa266d09c7ba590c167433a59d9b9c6912d0239c1a3338ac7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733144523742297137,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8975d342-96fa-4173-b477-e25909ca76da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2,PodSandboxId:976114a238b6804be12a7e2fa8070e45e1b21cd1182edec636f36738550adf1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523370291404,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2qfb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13f41c48-90af-4524-98fc-22daf331fbcb,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8,PodSandboxId:e31a2cd9e150c74180fb3121656a4ec47ed75c03625f9e84580488698f96d34f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733144523271914015,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2stsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3cb9697b-974e-4f8e-9931-38fe3d971940,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd,PodSandboxId:868c587c843e0c467fdc7a4a30aa1a348364c226b3dfe5a3d377b38c1aecb1c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733144522724749828,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d4vw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487da76d-2fae-4df0-b663-0cf128ae2911,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c,PodSandboxId:994247c1b27812d31d244c616f1dc451310ecfb18089e8125b9907cc2007ca1a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173314451143922457
1,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf5c277760a5f64606204d89db056873,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03,PodSandboxId:4ba9e13405b6d88815f3548cdf171ece54ca9366355b0d5dd2f6eb4b0e475e08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733144511436998108,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8,PodSandboxId:d60121c1e9a8e49d70b3cee0f6562ab3f2dbc4c5a7733b59363b9d45a591060a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733144511370
135638,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a43b5cbe13c8df408c11119c9d4af05,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762,PodSandboxId:fc1f450a1e85ee390ba9ad0b0008e329b4156dfafab7d5d26f622fa7835f27a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733144
511390167276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270f8464e0274fe9b311de1ab931524e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab,PodSandboxId:b1076630eca6f3c871251e1767c4a977d7083129850e1a4fa05889e32ee96cf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733144225242177536,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0a5f9e2682f67fac1de53a495d621b8,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2de7bac6-81f1-4fff-9a22-e7210d0c6781 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c76c8542ddb2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 minutes ago      Running             storage-provisioner       0                   722462dbc74c3       storage-provisioner
	e9a33522f73a0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 minutes ago      Running             coredns                   0                   976114a238b68       coredns-7c65d6cfc9-2qfb5
	5c38436dcda43       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 minutes ago      Running             coredns                   0                   e31a2cd9e150c       coredns-7c65d6cfc9-2stsx
	77a0bb9ef86b5       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   11 minutes ago      Running             kube-proxy                0                   868c587c843e0       kube-proxy-d4vw4
	d70644d4df653       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   11 minutes ago      Running             kube-scheduler            2                   994247c1b2781       kube-scheduler-default-k8s-diff-port-653783
	d6650cc0efc8c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   11 minutes ago      Running             kube-apiserver            2                   4ba9e13405b6d       kube-apiserver-default-k8s-diff-port-653783
	c51b7d1118274       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   11 minutes ago      Running             etcd                      2                   fc1f450a1e85e       etcd-default-k8s-diff-port-653783
	455f46ddd7a39       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   11 minutes ago      Running             kube-controller-manager   2                   d60121c1e9a8e       kube-controller-manager-default-k8s-diff-port-653783
	ce00f46dfc790       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Exited              kube-apiserver            1                   b1076630eca6f       kube-apiserver-default-k8s-diff-port-653783
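
	The container status table above is the CRI runtime's view of the node. A minimal sketch for reproducing this listing by hand, assuming the profile name default-k8s-diff-port-653783 and that crictl is present inside the minikube VM (as it is for the crio runtime used in this run):

	  minikube -p default-k8s-diff-port-653783 ssh -- sudo crictl ps -a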
	
	
	==> coredns [5c38436dcda43a17cf61a8146b3c1d19f1a0ee1233489ecc29d5ff5c68fcd3e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [e9a33522f73a005d2628766c9032face2c7fa8861bc0c9d09ec979807351e9b2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-653783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-653783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=default-k8s-diff-port-653783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T13_01_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 13:01:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-653783
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 13:13:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 13:12:20 +0000   Mon, 02 Dec 2024 13:01:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 13:12:20 +0000   Mon, 02 Dec 2024 13:01:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 13:12:20 +0000   Mon, 02 Dec 2024 13:01:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 13:12:20 +0000   Mon, 02 Dec 2024 13:01:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    default-k8s-diff-port-653783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2de64d5b8faf484ea5614ce3e9ffb71c
	  System UUID:                2de64d5b-8faf-484e-a561-4ce3e9ffb71c
	  Boot ID:                    ec4d4298-8c0e-4b7b-a674-67477f56d4bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-2qfb5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7c65d6cfc9-2stsx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-default-k8s-diff-port-653783                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kube-apiserver-default-k8s-diff-port-653783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-653783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-d4vw4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-653783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-6867b74b74-tcr8r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node default-k8s-diff-port-653783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node default-k8s-diff-port-653783 event: Registered Node default-k8s-diff-port-653783 in Controller
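
	The node description above shows metrics-server-6867b74b74-tcr8r scheduled on the node even though no metrics-server container appears in the CRI listing earlier. A sketch for checking the node and that pod manually, assuming the kubeconfig context carries the same name as the profile (minikube's default behaviour):

	  kubectl --context default-k8s-diff-port-653783 describe node default-k8s-diff-port-653783
	  kubectl --context default-k8s-diff-port-653783 -n kube-system get pods -o wide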
	
	
	==> dmesg <==
	[  +0.052605] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040870] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.955490] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.797235] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.143698] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.057043] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062353] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.193578] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.111646] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.281603] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[Dec 2 12:57] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +1.945598] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.061287] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.542295] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.227070] kauditd_printk_skb: 85 callbacks suppressed
	[Dec 2 13:01] systemd-fstab-generator[2577]: Ignoring "noauto" option for root device
	[  +0.060123] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.998503] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +0.077285] kauditd_printk_skb: 54 callbacks suppressed
	[Dec 2 13:02] systemd-fstab-generator[3014]: Ignoring "noauto" option for root device
	[  +0.096968] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.974270] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [c51b7d1118274da0fc4eca30648961649905ca73906bed87219bd3121ed7e762] <==
	{"level":"info","ts":"2024-12-02T13:01:52.618476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-02T13:01:52.618530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-02T13:01:52.618561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 1"}
	{"level":"info","ts":"2024-12-02T13:01:52.618574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.618579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.618588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.618595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-12-02T13:01:52.621548Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:default-k8s-diff-port-653783 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-02T13:01:52.621672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T13:01:52.622570Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-02T13:01:52.622681Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:01:52.624263Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T13:01:52.625972Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-02T13:01:52.628350Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-02T13:01:52.629527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-12-02T13:01:52.630034Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-02T13:01:52.630104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-02T13:01:52.631463Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd4b2769e12dd4ff","local-member-id":"10fb7b0a157fc334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:01:52.631553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:01:52.631607Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-02T13:10:04.490480Z","caller":"traceutil/trace.go:171","msg":"trace[468386243] transaction","detail":"{read_only:false; response_revision:878; number_of_response:1; }","duration":"182.773931ms","start":"2024-12-02T13:10:04.307635Z","end":"2024-12-02T13:10:04.490409Z","steps":["trace[468386243] 'process raft request'  (duration: 182.234734ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T13:11:52.664313Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-12-02T13:11:52.682610Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":722,"took":"17.88378ms","hash":1557660448,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-12-02T13:11:52.682701Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1557660448,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-12-02T13:12:19.693815Z","caller":"traceutil/trace.go:171","msg":"trace[1272855662] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"166.290126ms","start":"2024-12-02T13:12:19.527477Z","end":"2024-12-02T13:12:19.693767Z","steps":["trace[1272855662] 'process raft request'  (duration: 166.149981ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:13:47 up 17 min,  0 users,  load average: 0.04, 0.10, 0.09
	Linux default-k8s-diff-port-653783 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ce00f46dfc790742eb5b1ba5c98916aba95da8323ecca17fc80b4b215c872fab] <==
	W1202 13:01:45.209744       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.234837       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.253874       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.294157       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.374726       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.382233       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.415132       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.427885       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.430346       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.463857       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.468212       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.496627       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.506018       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.545623       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.557520       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.681750       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.682971       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.733979       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.738452       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.757818       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.823625       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.883666       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.944272       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:45.988445       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1202 13:01:46.287477       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d6650cc0efc8c6fac5cc33bc78272a3b147cb0d52f09a1f3c30c6bfff7810f03] <==
	I1202 13:09:55.092432       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:09:55.092552       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:11:54.090140       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:11:54.090321       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1202 13:11:55.092629       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:11:55.092705       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 13:11:55.092800       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:11:55.093002       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:11:55.093908       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:11:55.095001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1202 13:12:55.094645       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:12:55.095036       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1202 13:12:55.095252       1 handler_proxy.go:99] no RequestInfo found in the context
	E1202 13:12:55.095372       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1202 13:12:55.096940       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1202 13:12:55.096991       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
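
	The repeated 503 responses above indicate that the aggregated API v1beta1.metrics.k8s.io never became available in this cluster. A sketch for inspecting the aggregation status by hand, with the same context-name assumption as above and using the metrics-server deployment name implied by the replica set kube-system/metrics-server-6867b74b74 in the controller-manager log below:

	  kubectl --context default-k8s-diff-port-653783 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-653783 -n kube-system describe deployment metrics-server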
	
	
	==> kube-controller-manager [455f46ddd7a39b0e3d7b6679ac2233a36929bae665d35641d54404bd28617fd8] <==
	I1202 13:08:31.569911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:08:32.693342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="65.544µs"
	E1202 13:09:01.141800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:01.577671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:09:31.148824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:09:31.586964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:10:01.158449       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:10:01.596910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:10:31.167049       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:10:31.606269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:11:01.173759       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:11:01.616713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:11:31.180310       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:11:31.627041       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:12:01.186988       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:12:01.635405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:12:20.598591       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-653783"
	E1202 13:12:31.194328       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:12:31.642584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1202 13:13:01.202226       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:13:01.653738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:13:19.701642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="334.418µs"
	E1202 13:13:31.207938       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1202 13:13:31.661461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1202 13:13:31.691449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="69.639µs"
	
	
	==> kube-proxy [77a0bb9ef86b50ae744e65e5cd0a1a58f30aa67e7821baf187f638fd571cdafd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1202 13:02:03.514300       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1202 13:02:03.591048       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	E1202 13:02:03.591200       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 13:02:03.831236       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1202 13:02:03.831267       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 13:02:03.831299       1 server_linux.go:169] "Using iptables Proxier"
	I1202 13:02:03.859729       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 13:02:03.859953       1 server.go:483] "Version info" version="v1.31.2"
	I1202 13:02:03.859983       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 13:02:03.868932       1 config.go:199] "Starting service config controller"
	I1202 13:02:03.868978       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 13:02:03.869010       1 config.go:105] "Starting endpoint slice config controller"
	I1202 13:02:03.869047       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 13:02:03.876311       1 config.go:328] "Starting node config controller"
	I1202 13:02:03.897871       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 13:02:03.897902       1 shared_informer.go:320] Caches are synced for node config
	I1202 13:02:03.969159       1 shared_informer.go:320] Caches are synced for service config
	I1202 13:02:03.969358       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d70644d4df6533d29560af5b8c39b457a48d4eb561c44a3cae2de93df9d2e95c] <==
	W1202 13:01:54.109298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 13:01:54.109325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:54.109143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 13:01:54.109474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:54.109481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 13:01:54.109624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:54.986650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1202 13:01:54.986713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.036119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 13:01:55.036168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.080889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 13:01:55.080960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.103271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 13:01:55.103500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.105620       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1202 13:01:55.105722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.187277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 13:01:55.187327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.367476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 13:01:55.367526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.391625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 13:01:55.391943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 13:01:55.575705       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 13:01:55.575838       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1202 13:01:57.501696       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 13:12:54 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:12:54.676489    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:12:56.712198    2905 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:12:56.839548    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145176839222662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:12:56 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:12:56.839599    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145176839222662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:05 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:05.691145    2905 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 02 13:13:05 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:05.691267    2905 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 02 13:13:05 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:05.691522    2905 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q857h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-tcr8r_kube-system(2f017719-26ad-44ca-a44a-e6c20cd6438c): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 02 13:13:05 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:05.694357    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:13:06 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:06.841663    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145186841362911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:06 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:06.841705    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145186841362911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:16 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:16.844357    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145196843833220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:16 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:16.844869    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145196843833220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:19 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:19.676185    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:13:26 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:26.846153    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145206845658654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:26 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:26.846653    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145206845658654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:31 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:31.675821    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:13:36 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:36.848786    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145216848351812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:36 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:36.849164    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145216848351812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:44 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:44.677829    2905 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-tcr8r" podUID="2f017719-26ad-44ca-a44a-e6c20cd6438c"
	Dec 02 13:13:46 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:46.860168    2905 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145226859257520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 13:13:46 default-k8s-diff-port-653783 kubelet[2905]: E1202 13:13:46.860232    2905 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733145226859257520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2c76c8542ddb2ce9e524530c6ad79d85c36c37c5d4119710797242d78cea690e] <==
	I1202 13:02:03.891573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 13:02:03.914890       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 13:02:03.916373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 13:02:03.927888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 13:02:03.928228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653783_584bcb35-8d00-4e7c-beee-83c26aae3904!
	I1202 13:02:03.932614       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0476abe9-2cc6-4fcf-b524-3c8b10aeda4c", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-653783_584bcb35-8d00-4e7c-beee-83c26aae3904 became leader
	I1202 13:02:04.029476       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653783_584bcb35-8d00-4e7c-beee-83c26aae3904!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-tcr8r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 describe pod metrics-server-6867b74b74-tcr8r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653783 describe pod metrics-server-6867b74b74-tcr8r: exit status 1 (96.161086ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-tcr8r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-653783 describe pod metrics-server-6867b74b74-tcr8r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (155.66s)

                                                
                                    

Test pass (249/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 4.45
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.58
22 TestOffline 78.51
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 131.18
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.49
35 TestAddons/parallel/Registry 54.92
37 TestAddons/parallel/InspektorGadget 11.74
40 TestAddons/parallel/CSI 38.58
41 TestAddons/parallel/Headlamp 17.94
42 TestAddons/parallel/CloudSpanner 5.64
43 TestAddons/parallel/LocalPath 11.2
44 TestAddons/parallel/NvidiaDevicePlugin 6.53
45 TestAddons/parallel/Yakd 12.04
48 TestCertOptions 44.32
49 TestCertExpiration 271.22
51 TestForceSystemdFlag 68.68
52 TestForceSystemdEnv 43.93
54 TestKVMDriverInstallOrUpdate 1.16
58 TestErrorSpam/setup 45.26
59 TestErrorSpam/start 0.32
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.7
63 TestErrorSpam/stop 5.29
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.56
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 54.6
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
75 TestFunctional/serial/CacheCmd/cache/add_local 1.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 30.64
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.36
86 TestFunctional/serial/LogsFileCmd 1.36
87 TestFunctional/serial/InvalidService 4.18
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 12.46
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.27
97 TestFunctional/parallel/ServiceCmdConnect 7.54
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 35.18
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.48
103 TestFunctional/parallel/MySQL 21.98
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.25
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.22
114 TestFunctional/parallel/ServiceCmd/DeployApp 12.2
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/MountCmd/any-port 15.66
117 TestFunctional/parallel/ProfileCmd/profile_list 0.34
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
119 TestFunctional/parallel/Version/short 0.13
120 TestFunctional/parallel/Version/components 0.87
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
126 TestFunctional/parallel/ImageCommands/Setup 0.39
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.99
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.1
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.2
133 TestFunctional/parallel/ServiceCmd/List 0.52
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.77
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
137 TestFunctional/parallel/ServiceCmd/Format 0.34
147 TestFunctional/parallel/ServiceCmd/URL 0.33
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
151 TestFunctional/parallel/MountCmd/specific-port 1.85
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.91
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 196
160 TestMultiControlPlane/serial/DeployApp 5.88
161 TestMultiControlPlane/serial/PingHostFromPods 1.15
162 TestMultiControlPlane/serial/AddWorkerNode 56.56
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
165 TestMultiControlPlane/serial/CopyFile 12.51
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.47
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
174 TestMultiControlPlane/serial/RestartCluster 354.46
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
176 TestMultiControlPlane/serial/AddSecondaryNode 79.62
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
181 TestJSONOutput/start/Command 52.26
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.61
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.35
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.18
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 88.93
213 TestMountStart/serial/StartWithMountFirst 32.04
214 TestMountStart/serial/VerifyMountFirst 0.36
215 TestMountStart/serial/StartWithMountSecond 26.61
216 TestMountStart/serial/VerifyMountSecond 0.36
217 TestMountStart/serial/DeleteFirst 0.67
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.3
220 TestMountStart/serial/RestartStopped 21.7
221 TestMountStart/serial/VerifyMountPostStop 0.36
224 TestMultiNode/serial/FreshStart2Nodes 110.75
225 TestMultiNode/serial/DeployApp2Nodes 4.79
226 TestMultiNode/serial/PingHostFrom2Pods 0.75
227 TestMultiNode/serial/AddNode 49.43
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 6.95
231 TestMultiNode/serial/StopNode 2.34
232 TestMultiNode/serial/StartAfterStop 37.27
234 TestMultiNode/serial/DeleteNode 2.04
236 TestMultiNode/serial/RestartMultiNode 175.82
237 TestMultiNode/serial/ValidateNameConflict 43.92
244 TestScheduledStopUnix 118.45
248 TestRunningBinaryUpgrade 226.31
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 91.25
263 TestPause/serial/Start 110.98
264 TestNoKubernetes/serial/StartWithStopK8s 68.78
265 TestPause/serial/SecondStartNoReconfiguration 41.39
266 TestNoKubernetes/serial/Start 31.99
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
268 TestNoKubernetes/serial/ProfileList 23.24
269 TestPause/serial/Pause 0.64
270 TestPause/serial/VerifyStatus 0.23
271 TestPause/serial/Unpause 0.62
272 TestPause/serial/PauseAgain 0.84
273 TestPause/serial/DeletePaused 0.88
274 TestPause/serial/VerifyDeletedResources 13.01
275 TestStoppedBinaryUpgrade/Setup 0.66
276 TestStoppedBinaryUpgrade/Upgrade 135.05
277 TestNoKubernetes/serial/Stop 1.32
278 TestNoKubernetes/serial/StartNoArgs 39.54
286 TestNetworkPlugins/group/false 2.75
290 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
295 TestStartStop/group/no-preload/serial/FirstStart 71.93
296 TestStartStop/group/no-preload/serial/DeployApp 9.29
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.92
300 TestStartStop/group/embed-certs/serial/FirstStart 61.14
302 TestStartStop/group/newest-cni/serial/FirstStart 52.2
303 TestStartStop/group/embed-certs/serial/DeployApp 10.3
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
308 TestStartStop/group/newest-cni/serial/Stop 7.34
309 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
310 TestStartStop/group/newest-cni/serial/SecondStart 37.12
312 TestStartStop/group/no-preload/serial/SecondStart 652.04
313 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
316 TestStartStop/group/newest-cni/serial/Pause 2.33
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 326.38
322 TestStartStop/group/embed-certs/serial/SecondStart 533.63
323 TestStartStop/group/old-k8s-version/serial/Stop 6.28
324 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 616.57
338 TestNetworkPlugins/group/auto/Start 80.82
339 TestNetworkPlugins/group/kindnet/Start 70.71
340 TestNetworkPlugins/group/auto/KubeletFlags 0.19
341 TestNetworkPlugins/group/auto/NetCatPod 10.24
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestNetworkPlugins/group/auto/DNS 0.16
344 TestNetworkPlugins/group/auto/Localhost 0.17
345 TestNetworkPlugins/group/auto/HairPin 0.12
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
347 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
349 TestNetworkPlugins/group/calico/Start 77.33
350 TestNetworkPlugins/group/kindnet/DNS 0.17
351 TestNetworkPlugins/group/kindnet/Localhost 0.14
352 TestNetworkPlugins/group/kindnet/HairPin 0.18
353 TestNetworkPlugins/group/custom-flannel/Start 81.63
354 TestNetworkPlugins/group/enable-default-cni/Start 120.9
355 TestNetworkPlugins/group/calico/ControllerPod 6.01
356 TestNetworkPlugins/group/calico/KubeletFlags 0.26
357 TestNetworkPlugins/group/calico/NetCatPod 12.29
358 TestNetworkPlugins/group/calico/DNS 0.18
359 TestNetworkPlugins/group/calico/Localhost 0.14
360 TestNetworkPlugins/group/calico/HairPin 0.14
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
363 TestNetworkPlugins/group/custom-flannel/DNS 0.17
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
366 TestNetworkPlugins/group/flannel/Start 69.74
367 TestNetworkPlugins/group/bridge/Start 98.92
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.28
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
375 TestNetworkPlugins/group/flannel/NetCatPod 11.23
376 TestNetworkPlugins/group/flannel/DNS 0.15
377 TestNetworkPlugins/group/flannel/Localhost 0.11
378 TestNetworkPlugins/group/flannel/HairPin 0.12
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
380 TestNetworkPlugins/group/bridge/NetCatPod 11.2
381 TestNetworkPlugins/group/bridge/DNS 0.16
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (10.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-407914 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-407914 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.295927328s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.30s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1202 11:30:31.500709   13416 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1202 11:30:31.500813   13416 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-407914
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-407914: exit status 85 (56.140014ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-407914 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |          |
	|         | -p download-only-407914        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:21.243300   13428 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:21.243398   13428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:21.243407   13428 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:21.243411   13428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:21.243560   13428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	W1202 11:30:21.243663   13428 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20033-6257/.minikube/config/config.json: open /home/jenkins/minikube-integration/20033-6257/.minikube/config/config.json: no such file or directory
	I1202 11:30:21.244211   13428 out.go:352] Setting JSON to true
	I1202 11:30:21.245060   13428 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":773,"bootTime":1733138248,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:21.245155   13428 start.go:139] virtualization: kvm guest
	I1202 11:30:21.247181   13428 out.go:97] [download-only-407914] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1202 11:30:21.247272   13428 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 11:30:21.247309   13428 notify.go:220] Checking for updates...
	I1202 11:30:21.248322   13428 out.go:169] MINIKUBE_LOCATION=20033
	I1202 11:30:21.249362   13428 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:21.250258   13428 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:30:21.251110   13428 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:21.252037   13428 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 11:30:21.253800   13428 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 11:30:21.253969   13428 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:21.347954   13428 out.go:97] Using the kvm2 driver based on user configuration
	I1202 11:30:21.347976   13428 start.go:297] selected driver: kvm2
	I1202 11:30:21.347982   13428 start.go:901] validating driver "kvm2" against <nil>
	I1202 11:30:21.348313   13428 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:21.348439   13428 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1202 11:30:21.362642   13428 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1202 11:30:21.362672   13428 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:21.363206   13428 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1202 11:30:21.363348   13428 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 11:30:21.363381   13428 cni.go:84] Creating CNI manager for ""
	I1202 11:30:21.363428   13428 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 11:30:21.363438   13428 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:21.363491   13428 start.go:340] cluster config:
	{Name:download-only-407914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-407914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:21.363651   13428 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:21.365076   13428 out.go:97] Downloading VM boot image ...
	I1202 11:30:21.365102   13428 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1202 11:30:24.910045   13428 out.go:97] Starting "download-only-407914" primary control-plane node in "download-only-407914" cluster
	I1202 11:30:24.910069   13428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 11:30:24.934496   13428 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:24.934525   13428 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:24.934656   13428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 11:30:24.936204   13428 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1202 11:30:24.936216   13428 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:24.962531   13428 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:30.138464   13428 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:30.138555   13428 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:31.040193   13428 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1202 11:30:31.040532   13428 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/download-only-407914/config.json ...
	I1202 11:30:31.040559   13428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/download-only-407914/config.json: {Name:mk47f1d994b98344fa76b0b7af146f3760dde7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:30:31.040711   13428 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 11:30:31.040914   13428 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20033-6257/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-407914 host does not exist
	  To start a cluster, run: "minikube start -p download-only-407914"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-407914
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (4.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-257770 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-257770 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.449269119s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.45s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1202 11:30:36.256863   13416 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1202 11:30:36.256909   13416 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-257770
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-257770: exit status 85 (56.656688ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-407914 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | -p download-only-407914        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| delete  | -p download-only-407914        | download-only-407914 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | -o=json --download-only        | download-only-257770 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | -p download-only-257770        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:31.845552   13635 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:31.845635   13635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:31.845640   13635 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:31.845644   13635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:31.845802   13635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:30:31.846298   13635 out.go:352] Setting JSON to true
	I1202 11:30:31.847052   13635 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":784,"bootTime":1733138248,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:31.847139   13635 start.go:139] virtualization: kvm guest
	I1202 11:30:31.849017   13635 out.go:97] [download-only-257770] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:30:31.849149   13635 notify.go:220] Checking for updates...
	I1202 11:30:31.850277   13635 out.go:169] MINIKUBE_LOCATION=20033
	I1202 11:30:31.851649   13635 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:31.852773   13635 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:30:31.853936   13635 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:30:31.855142   13635 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-257770 host does not exist
	  To start a cluster, run: "minikube start -p download-only-257770"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-257770
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I1202 11:30:36.786864   13416 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-408241 --alsologtostderr --binary-mirror http://127.0.0.1:43999 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-408241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-408241
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (78.51s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-394557 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-394557 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.475909869s)
helpers_test.go:175: Cleaning up "offline-crio-394557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-394557
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-394557: (1.035545799s)
--- PASS: TestOffline (78.51s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-093588
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-093588: exit status 85 (49.441814ms)

                                                
                                                
-- stdout --
	* Profile "addons-093588" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-093588"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-093588
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-093588: exit status 85 (48.356495ms)

                                                
                                                
-- stdout --
	* Profile "addons-093588" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-093588"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (131.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-093588 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-093588 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m11.182959022s)
--- PASS: TestAddons/Setup (131.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-093588 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-093588 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-093588 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-093588 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f6e4744-0d79-497c-83f9-2119471a0df3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f6e4744-0d79-497c-83f9-2119471a0df3] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003153491s
addons_test.go:633: (dbg) Run:  kubectl --context addons-093588 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-093588 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-093588 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

                                                
                                    
x
+
TestAddons/parallel/Registry (54.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.273601ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00243174s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003987629s
addons_test.go:331: (dbg) Run:  kubectl --context addons-093588 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-093588 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-093588 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.751247083s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 ip
2024/12/02 11:33:35 [DEBUG] GET http://192.168.39.203:5000
2024/12/02 11:33:35 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:35 [DEBUG] GET http://192.168.39.203:5000: retrying in 1s (4 left)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (54.92s)
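The interleaved [DEBUG]/[ERR] lines above come from the harness probing the registry addon over HTTP and retrying with a shrinking attempt budget. The following is a minimal Go sketch of that retry pattern only; it is not the harness's own retry code, and the registry URL plus the five-attempt budget are taken from the log purely for illustration.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetries issues GET requests until one succeeds or the attempt
// budget is exhausted, doubling the wait between attempts.
func getWithRetries(url string, attempts int) (*http.Response, error) {
	backoff := time.Second
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		fmt.Printf("[ERR] GET %s request failed: %v\n", url, err)
		if i < attempts-1 {
			fmt.Printf("[DEBUG] GET %s: retrying in %s (%d left)\n", url, backoff, attempts-1-i)
			time.Sleep(backoff)
			backoff *= 2
		}
	}
	return nil, fmt.Errorf("GET %s giving up after %d attempt(s): %w", url, attempts, lastErr)
}

func main() {
	// Endpoint copied from the log above; reachable only from that test host.
	if resp, err := getWithRetries("http://192.168.39.203:5000", 5); err != nil {
		fmt.Println(err)
	} else {
		resp.Body.Close()
		fmt.Println("registry reachable:", resp.Status)
	}
}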

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4865q" [6c5b25c1-d459-4d56-9f5e-5c006a526a4f] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003719728s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable inspektor-gadget --alsologtostderr -v=1
2024/12/02 11:33:50 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
I1202 11:33:50.376959   13416 retry.go:31] will retry after 516.843231ms: GET http://192.168.39.203:5000 giving up after 5 attempt(s): Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:50 [DEBUG] GET http://192.168.39.203:5000
2024/12/02 11:33:50 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:50 [DEBUG] GET http://192.168.39.203:5000: retrying in 1s (4 left)
2024/12/02 11:33:51 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:51 [DEBUG] GET http://192.168.39.203:5000: retrying in 2s (3 left)
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 addons disable inspektor-gadget --alsologtostderr -v=1: (5.732892917s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                    
x
+
TestAddons/parallel/CSI (38.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 11:33:37.411442   13416 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 11:33:37.415819   13416 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 11:33:37.415841   13416 kapi.go:107] duration metric: took 4.41235ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.419377ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-093588 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-093588 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [13110a2c-9444-4cd0-9ef8-c0c77f252894] Pending
helpers_test.go:344: "task-pv-pod" [13110a2c-9444-4cd0-9ef8-c0c77f252894] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [13110a2c-9444-4cd0-9ef8-c0c77f252894] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005036731s
addons_test.go:511: (dbg) Run:  kubectl --context addons-093588 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-093588 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
2024/12/02 11:33:53 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:53 [DEBUG] GET http://192.168.39.203:5000: retrying in 4s (2 left)
helpers_test.go:419: (dbg) Run:  kubectl --context addons-093588 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-093588 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-093588 delete pod task-pv-pod: (1.005572158s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-093588 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-093588 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
2024/12/02 11:33:57 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:57 [DEBUG] GET http://192.168.39.203:5000: retrying in 8s (1 left)
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-093588 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [179b2fd0-56c9-4e0e-8288-e66d73594712] Pending
helpers_test.go:344: "task-pv-pod-restore" [179b2fd0-56c9-4e0e-8288-e66d73594712] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [179b2fd0-56c9-4e0e-8288-e66d73594712] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006093476s
addons_test.go:553: (dbg) Run:  kubectl --context addons-093588 delete pod task-pv-pod-restore
2024/12/02 11:34:07 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:34:07 [DEBUG] GET http://192.168.39.203:5000: retrying in 2s (3 left)
addons_test.go:557: (dbg) Run:  kubectl --context addons-093588 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-093588 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable csi-hostpath-driver --alsologtostderr -v=1
2024/12/02 11:34:09 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:34:09 [DEBUG] GET http://192.168.39.203:5000: retrying in 4s (2 left)
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.81974553s)
--- PASS: TestAddons/parallel/CSI (38.58s)
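The repeated helpers_test.go:394 invocations above are the harness polling the PVC phase until it reports Bound. A minimal sketch of that wait loop, assuming kubectl and the addons-093588 context are available locally, might look like the following; the function name and the two-second poll interval are illustrative, not the harness's actual values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc -o jsonpath={.status.phase}` until
// the claim reports Bound or the timeout expires.
func waitForPVCBound(context, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-093588", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}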

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-093588 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-093588 --alsologtostderr -v=1: (1.171497911s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-hdqlv" [31f2cd66-abb0-4a6e-a516-61a3b7be66d4] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-hdqlv" [31f2cd66-abb0-4a6e-a516-61a3b7be66d4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-hdqlv" [31f2cd66-abb0-4a6e-a516-61a3b7be66d4] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004325643s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 addons disable headlamp --alsologtostderr -v=1: (5.761302855s)
--- PASS: TestAddons/parallel/Headlamp (17.94s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-nmmfn" [1ce5c03b-3a72-4ae3-837d-a13e70b66ecf] Running
2024/12/02 11:33:38 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:33:38 [DEBUG] GET http://192.168.39.203:5000: retrying in 4s (2 left)
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004377338s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.2s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-093588 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-093588 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b456b1a7-703a-4b58-aafb-7f49802b98ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b456b1a7-703a-4b58-aafb-7f49802b98ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b456b1a7-703a-4b58-aafb-7f49802b98ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003314654s
addons_test.go:906: (dbg) Run:  kubectl --context addons-093588 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 ssh "cat /opt/local-path-provisioner/pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-093588 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-093588 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.20s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004427821s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hsrwm" [7fc2195a-6097-41e0-96c2-50824acad1ce] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004359936s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 addons disable yakd --alsologtostderr -v=1: (6.038910129s)
--- PASS: TestAddons/parallel/Yakd (12.04s)

                                                
                                    
x
+
TestCertOptions (44.32s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-536755 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-536755 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.089589902s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-536755 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-536755 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-536755 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-536755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-536755
--- PASS: TestCertOptions (44.32s)
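The test above verifies the requested SANs and API server port by dumping /var/lib/minikube/certs/apiserver.crt with openssl inside the VM. As a rough equivalent in Go, assuming the certificate has already been copied out of the VM to a local file named apiserver.crt (the local path is hypothetical), the SANs can be read with crypto/x509:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the VM's apiserver certificate.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
	fmt.Println("NotAfter:", cert.NotAfter)
}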

                                                
                                    
x
+
TestCertExpiration (271.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-424616 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1202 12:37:49.241052   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-424616 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (59.703347845s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-424616 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-424616 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.537293514s)
helpers_test.go:175: Cleaning up "cert-expiration-424616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-424616
--- PASS: TestCertExpiration (271.22s)

                                                
                                    
x
+
TestForceSystemdFlag (68.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-615809 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-615809 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.66311419s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-615809 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-615809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-615809
--- PASS: TestForceSystemdFlag (68.68s)

                                                
                                    
x
+
TestForceSystemdEnv (43.93s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-422851 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-422851 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.155546922s)
helpers_test.go:175: Cleaning up "force-systemd-env-422851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-422851
--- PASS: TestForceSystemdEnv (43.93s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.16s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1202 12:37:31.950041   13416 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 12:37:31.950187   13416 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1202 12:37:31.977277   13416 install.go:62] docker-machine-driver-kvm2: exit status 1
W1202 12:37:31.977538   13416 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1202 12:37:31.977596   13416 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1104380406/001/docker-machine-driver-kvm2
I1202 12:37:32.127740   13416 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1104380406/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc00068eca0 gz:0xc00068eca8 tar:0xc00068ec50 tar.bz2:0xc00068ec60 tar.gz:0xc00068ec70 tar.xz:0xc00068ec80 tar.zst:0xc00068ec90 tbz2:0xc00068ec60 tgz:0xc00068ec70 txz:0xc00068ec80 tzst:0xc00068ec90 xz:0xc00068ecb0 zip:0xc00068ecc0 zst:0xc00068ecb8] Getters:map[file:0xc001c4d870 http:0xc000788780 https:0xc0007887d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1202 12:37:32.127782   13416 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1104380406/001/docker-machine-driver-kvm2
I1202 12:37:32.677606   13416 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 12:37:32.677738   13416 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1202 12:37:32.705337   13416 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1202 12:37:32.705375   13416 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1202 12:37:32.705433   13416 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1202 12:37:32.705464   13416 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1104380406/002/docker-machine-driver-kvm2
I1202 12:37:32.729779   13416 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1104380406/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc00068eca0 gz:0xc00068eca8 tar:0xc00068ec50 tar.bz2:0xc00068ec60 tar.gz:0xc00068ec70 tar.xz:0xc00068ec80 tar.zst:0xc00068ec90 tbz2:0xc00068ec60 tgz:0xc00068ec70 txz:0xc00068ec80 tzst:0xc00068ec90 xz:0xc00068ecb0 zip:0xc00068ecc0 zst:0xc00068ecb8] Getters:map[file:0xc0014f6160 http:0xc00074a640 https:0xc00074a690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1202 12:37:32.729826   13416 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1104380406/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.16s)
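The driver.go:46 lines above show the download falling back from the architecture-suffixed release asset to the common one after the checksum file returns 404. The sketch below illustrates that fallback shape only; it is not minikube's actual download.go, which additionally verifies checksums and goes through go-getter.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url into dst, treating any non-200 status as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

// downloadDriver tries the arch-specific asset first, then the common one.
func downloadDriver(base, dst string) error {
	if err := fetch(base+"-amd64", dst); err != nil {
		fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", err)
		return fetch(base, dst)
	}
	return nil
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	if err := downloadDriver(base, "/tmp/docker-machine-driver-kvm2"); err != nil {
		fmt.Println(err)
	}
}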

                                                
                                    
x
+
TestErrorSpam/setup (45.26s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-550556 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-550556 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-550556 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-550556 --driver=kvm2  --container-runtime=crio: (45.255961088s)
--- PASS: TestErrorSpam/setup (45.26s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 stop: (2.293580724s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 stop: (1.935426427s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-550556 --log_dir /tmp/nospam-550556 stop: (1.064161995s)
--- PASS: TestErrorSpam/stop (5.29s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20033-6257/.minikube/files/etc/test/nested/copy/13416/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.56s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-054639 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1202 11:42:49.244552   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.250952   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.262223   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.283598   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.324931   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.406304   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.567790   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:49.889435   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:50.531427   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:51.813110   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:54.374742   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:42:59.496645   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:09.738771   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-054639 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.562076785s)
--- PASS: TestFunctional/serial/StartWithProxy (52.56s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (54.6s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1202 11:43:22.306900   13416 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-054639 --alsologtostderr -v=8
E1202 11:43:30.220977   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:44:11.183167   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-054639 --alsologtostderr -v=8: (54.603287793s)
functional_test.go:663: soft start took 54.603951389s for "functional-054639" cluster.
I1202 11:44:16.910571   13416 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (54.60s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-054639 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 cache add registry.k8s.io/pause:3.1: (1.065941689s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 cache add registry.k8s.io/pause:3.3: (1.134373598s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 cache add registry.k8s.io/pause:latest: (1.051371793s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-054639 /tmp/TestFunctionalserialCacheCmdcacheadd_local167021800/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cache add minikube-local-cache-test:functional-054639
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cache delete minikube-local-cache-test:functional-054639
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-054639
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.162679ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 kubectl -- --context functional-054639 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-054639 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (30.64s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-054639 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-054639 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.634957249s)
functional_test.go:761: restart took 30.635079878s for "functional-054639" cluster.
I1202 11:44:54.211701   13416 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (30.64s)
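Note: the "restart" exercised here is simply a start against the existing profile with an extra apiserver flag. A hand-run equivalent (release minikube binary assumed) would be:

# restart the running cluster, overriding the apiserver admission plugins
# and waiting for all components to report ready
minikube start -p functional-054639 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
  --wait=all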

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-054639 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 logs: (1.362443988s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 logs --file /tmp/TestFunctionalserialLogsFileCmd4190368963/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 logs --file /tmp/TestFunctionalserialLogsFileCmd4190368963/001/logs.txt: (1.357324636s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.18s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-054639 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-054639
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-054639: exit status 115 (270.819775ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.77:30601 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-054639 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)
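Note: the negative case above can be reproduced outside the harness. A rough sketch, assuming the invalidsvc.yaml manifest from the minikube repo's testdata directory and a release minikube binary:

kubectl --context functional-054639 apply -f testdata/invalidsvc.yaml
minikube service invalid-svc -p functional-054639   # expected: exit status 115 (SVC_UNREACHABLE), no running pod backs the service
kubectl --context functional-054639 delete -f testdata/invalidsvc.yaml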

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 config get cpus: exit status 14 (59.041373ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 config get cpus: exit status 14 (51.90043ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
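Note: the exit codes being asserted here are: config get on an unset key exits with status 14, while set/get on a defined key succeed. A minimal sketch against the same profile:

minikube -p functional-054639 config unset cpus
minikube -p functional-054639 config get cpus    # exit status 14: key not found in config
minikube -p functional-054639 config set cpus 2
minikube -p functional-054639 config get cpus    # prints 2, exit status 0
minikube -p functional-054639 config unset cpus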

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-054639 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-054639 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21381: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.46s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-054639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-054639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.870688ms)

                                                
                                                
-- stdout --
	* [functional-054639] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:45:02.866304   21093 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:02.866394   21093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:02.866400   21093 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:02.866405   21093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:02.866601   21093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:45:02.867190   21093 out.go:352] Setting JSON to false
	I1202 11:45:02.868398   21093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1655,"bootTime":1733138248,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:45:02.868478   21093 start.go:139] virtualization: kvm guest
	I1202 11:45:02.870261   21093 out.go:177] * [functional-054639] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:45:02.871742   21093 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:45:02.871742   21093 notify.go:220] Checking for updates...
	I1202 11:45:02.873991   21093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:45:02.875296   21093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:45:02.876778   21093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:02.877998   21093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:45:02.879125   21093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:45:02.881126   21093 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:45:02.881752   21093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:02.881830   21093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:02.900813   21093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I1202 11:45:02.901327   21093 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:02.901860   21093 main.go:141] libmachine: Using API Version  1
	I1202 11:45:02.901888   21093 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:02.902288   21093 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:02.902470   21093 main.go:141] libmachine: (functional-054639) Calling .DriverName
	I1202 11:45:02.902727   21093 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:45:02.903129   21093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:02.903163   21093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:02.918503   21093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I1202 11:45:02.918839   21093 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:02.919294   21093 main.go:141] libmachine: Using API Version  1
	I1202 11:45:02.919314   21093 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:02.919556   21093 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:02.919709   21093 main.go:141] libmachine: (functional-054639) Calling .DriverName
	I1202 11:45:02.950256   21093 out.go:177] * Using the kvm2 driver based on existing profile
	I1202 11:45:02.951464   21093 start.go:297] selected driver: kvm2
	I1202 11:45:02.951476   21093 start.go:901] validating driver "kvm2" against &{Name:functional-054639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-054639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:45:02.951600   21093 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:45:02.953444   21093 out.go:201] 
	W1202 11:45:02.954499   21093 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 11:45:02.955532   21093 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-054639 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
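Note: the first invocation above is expected to fail validation, since --memory 250MB is below minikube's 1800MB floor and the dry run exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY); the second, with no memory override, validates cleanly against the existing profile. A hand-run sketch (release binary assumed):

minikube start -p functional-054639 --dry-run --memory 250MB \
  --alsologtostderr --driver=kvm2 --container-runtime=crio      # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
minikube start -p functional-054639 --dry-run \
  --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio # validates, no changes applied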

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-054639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-054639 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.507318ms)

                                                
                                                
-- stdout --
	* [functional-054639] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:45:02.708196   21038 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:02.708790   21038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:02.708799   21038 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:02.708803   21038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:02.709052   21038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 11:45:02.709533   21038 out.go:352] Setting JSON to false
	I1202 11:45:02.710399   21038 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1655,"bootTime":1733138248,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:45:02.710481   21038 start.go:139] virtualization: kvm guest
	I1202 11:45:02.712086   21038 out.go:177] * [functional-054639] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1202 11:45:02.713477   21038 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:45:02.713480   21038 notify.go:220] Checking for updates...
	I1202 11:45:02.719459   21038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:45:02.720635   21038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 11:45:02.721740   21038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 11:45:02.722959   21038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:45:02.724014   21038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:45:02.725541   21038 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:45:02.726142   21038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:02.726212   21038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:02.746094   21038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
	I1202 11:45:02.746586   21038 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:02.747269   21038 main.go:141] libmachine: Using API Version  1
	I1202 11:45:02.747287   21038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:02.747647   21038 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:02.747817   21038 main.go:141] libmachine: (functional-054639) Calling .DriverName
	I1202 11:45:02.748077   21038 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:45:02.748506   21038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 11:45:02.748544   21038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 11:45:02.765382   21038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38043
	I1202 11:45:02.765897   21038 main.go:141] libmachine: () Calling .GetVersion
	I1202 11:45:02.766451   21038 main.go:141] libmachine: Using API Version  1
	I1202 11:45:02.766468   21038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 11:45:02.766836   21038 main.go:141] libmachine: () Calling .GetMachineName
	I1202 11:45:02.767033   21038 main.go:141] libmachine: (functional-054639) Calling .DriverName
	I1202 11:45:02.801349   21038 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1202 11:45:02.802719   21038 start.go:297] selected driver: kvm2
	I1202 11:45:02.802734   21038 start.go:901] validating driver "kvm2" against &{Name:functional-054639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-054639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:45:02.802860   21038 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:45:02.804964   21038 out.go:201] 
	W1202 11:45:02.806041   21038 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 11:45:02.807237   21038 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-054639 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-054639 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5dztv" [d0aebbc2-9cd0-4431-8f14-9745b27e821c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5dztv" [d0aebbc2-9cd0-4431-8f14-9745b27e821c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.002972185s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.77:32754
functional_test.go:1675: http://192.168.39.77:32754: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-5dztv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.77:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.77:32754
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.54s)
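Note: the connectivity check boils down to exposing a deployment as a NodePort and hitting the URL minikube reports. A minimal sketch; the final curl is an assumption added for illustration (the test fetches the URL from Go instead):

kubectl --context functional-054639 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-054639 expose deployment hello-node-connect \
  --type=NodePort --port=8080
URL=$(minikube -p functional-054639 service hello-node-connect --url)
curl -s "$URL"   # echoserver replies with the pod hostname and request details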

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (35.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a460b31d-c594-4633-ae5b-d06a28a5016f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.047961664s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-054639 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-054639 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-054639 get pvc myclaim -o=json
I1202 11:45:20.618391   13416 retry.go:31] will retry after 1.132453134s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:09ec0497-baa7-429b-8401-fe5fb5a3cf98 ResourceVersion:913 Generation:0 CreationTimestamp:2024-12-02 11:45:20 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-09ec0497-baa7-429b-8401-fe5fb5a3cf98 StorageClassName:0xc001ca6520 VolumeMode:0xc001ca6530 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-054639 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-054639 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [84bca661-142a-466c-a223-ea665378b729] Pending
helpers_test.go:344: "sp-pod" [84bca661-142a-466c-a223-ea665378b729] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [84bca661-142a-466c-a223-ea665378b729] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003445396s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-054639 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-054639 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-054639 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c2a1516-5d10-4d79-bea0-757ad66d7a4d] Pending
helpers_test.go:344: "sp-pod" [1c2a1516-5d10-4d79-bea0-757ad66d7a4d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003448538s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-054639 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.18s)
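Note: from the last-applied-configuration captured in the retry message above, the claim under test is a plain 500Mi ReadWriteOnce PVC named myclaim. The sketch below is reconstructed from that annotation, not the literal testdata/storage-provisioner/pvc.yaml file (whose contents are not shown in this log):

kubectl --context functional-054639 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
kubectl --context functional-054639 get pvc myclaim   # phase moves from Pending to Bound once provisioned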

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh -n functional-054639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cp functional-054639:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2573061740/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh -n functional-054639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh -n functional-054639 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)
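Note: the three copy directions exercised above map to the following commands (release minikube binary assumed; the last target is a path that does not yet exist on the node):

# host -> node
minikube -p functional-054639 cp testdata/cp-test.txt /home/docker/cp-test.txt
# node -> host
minikube -p functional-054639 cp functional-054639:/home/docker/cp-test.txt /tmp/cp-test.txt
# host -> previously nonexistent path on the node, then verify via ssh
minikube -p functional-054639 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
minikube -p functional-054639 ssh -n functional-054639 "sudo cat /tmp/does/not/exist/cp-test.txt"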

                                                
                                    
TestFunctional/parallel/MySQL (21.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-054639 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-gbvlf" [ebf0457e-b523-48fe-b8f1-7859fa7553ce] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-gbvlf" [ebf0457e-b523-48fe-b8f1-7859fa7553ce] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.004151526s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-054639 exec mysql-6cdb49bbb-gbvlf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-054639 exec mysql-6cdb49bbb-gbvlf -- mysql -ppassword -e "show databases;": exit status 1 (149.794033ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 11:45:36.341969   13416 retry.go:31] will retry after 712.87619ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-054639 exec mysql-6cdb49bbb-gbvlf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-054639 exec mysql-6cdb49bbb-gbvlf -- mysql -ppassword -e "show databases;": exit status 1 (116.416316ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 11:45:37.172193   13416 retry.go:31] will retry after 1.695377357s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-054639 exec mysql-6cdb49bbb-gbvlf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.98s)
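Note: the two failed attempts above are the usual MySQL warm-up sequence: the pod is Running but mysqld is still initializing (ERROR 1045, then ERROR 2002 while the socket is not yet listening), so the check simply retries. A hand-run equivalent; the jsonpath lookup and retry loop are additions for illustration, not part of the test:

POD=$(kubectl --context functional-054639 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
# retry until mysqld inside the pod accepts the root password
until kubectl --context functional-054639 exec "$POD" -- mysql -ppassword -e "show databases;"; do
  sleep 2
done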

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13416/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /etc/test/nested/copy/13416/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13416.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /etc/ssl/certs/13416.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13416.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /usr/share/ca-certificates/13416.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/134162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /etc/ssl/certs/134162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/134162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /usr/share/ca-certificates/134162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-054639 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh "sudo systemctl is-active docker": exit status 1 (282.072317ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh "sudo systemctl is-active containerd": exit status 1 (274.98436ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
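Note: the non-zero exits above are expected: systemctl is-active prints "inactive" and exits with status 3 for a unit that is not running, which is exactly what this test wants for docker and containerd on a crio cluster. Checking by hand (the crio line is an added sanity check, not part of the test):

minikube -p functional-054639 ssh "sudo systemctl is-active crio"         # active, exit 0
minikube -p functional-054639 ssh "sudo systemctl is-active docker"       # inactive, ssh exits 3
minikube -p functional-054639 ssh "sudo systemctl is-active containerd"   # inactive, ssh exits 3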

                                                
                                    
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-054639 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-054639 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-qm294" [4cbfe9eb-7e92-42ca-a8d6-afb5711eafbe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-qm294" [4cbfe9eb-7e92-42ca-a8d6-afb5711eafbe] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004137294s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdany-port3244068902/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733139901542539183" to /tmp/TestFunctionalparallelMountCmdany-port3244068902/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733139901542539183" to /tmp/TestFunctionalparallelMountCmdany-port3244068902/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733139901542539183" to /tmp/TestFunctionalparallelMountCmdany-port3244068902/001/test-1733139901542539183
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (255.804939ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 11:45:01.798591   13416 retry.go:31] will retry after 430.360237ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 11:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 11:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 11:45 test-1733139901542539183
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh cat /mount-9p/test-1733139901542539183
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-054639 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6840fc7b-f78d-4452-ab04-32f103713267] Pending
helpers_test.go:344: "busybox-mount" [6840fc7b-f78d-4452-ab04-32f103713267] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6840fc7b-f78d-4452-ab04-32f103713267] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6840fc7b-f78d-4452-ab04-32f103713267] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.004027081s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-054639 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdany-port3244068902/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.66s)
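Note: the mount flow can be reproduced manually. A minimal sketch, assuming a host directory of your choosing (/tmp/mount-src here) in place of the per-test temp dir:

# keep the 9p mount alive in the background while inspecting it from the guest
minikube -p functional-054639 mount /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
minikube -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-054639 ssh -- ls -la /mount-9p
# tear down: unmount inside the guest, then stop the background mount process
minikube -p functional-054639 ssh "sudo umount -f /mount-9p"
kill $MOUNT_PID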

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "287.782141ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.691654ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "426.788888ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.728156ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-054639 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-054639
localhost/kicbase/echo-server:functional-054639
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-054639 image ls --format short --alsologtostderr:
I1202 11:45:20.998105   22779 out.go:345] Setting OutFile to fd 1 ...
I1202 11:45:20.998224   22779 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:20.998234   22779 out.go:358] Setting ErrFile to fd 2...
I1202 11:45:20.998238   22779 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:20.998423   22779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
I1202 11:45:20.999030   22779 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:20.999144   22779 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:20.999629   22779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:20.999679   22779 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:21.015085   22779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
I1202 11:45:21.015576   22779 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:21.016156   22779 main.go:141] libmachine: Using API Version  1
I1202 11:45:21.016188   22779 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:21.016523   22779 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:21.016712   22779 main.go:141] libmachine: (functional-054639) Calling .GetState
I1202 11:45:21.018554   22779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:21.018616   22779 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:21.033288   22779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43963
I1202 11:45:21.033862   22779 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:21.034333   22779 main.go:141] libmachine: Using API Version  1
I1202 11:45:21.034356   22779 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:21.034669   22779 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:21.034874   22779 main.go:141] libmachine: (functional-054639) Calling .DriverName
I1202 11:45:21.035091   22779 ssh_runner.go:195] Run: systemctl --version
I1202 11:45:21.035122   22779 main.go:141] libmachine: (functional-054639) Calling .GetSSHHostname
I1202 11:45:21.037928   22779 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:21.038365   22779 main.go:141] libmachine: (functional-054639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:4f:3f", ip: ""} in network mk-functional-054639: {Iface:virbr1 ExpiryTime:2024-12-02 12:42:44 +0000 UTC Type:0 Mac:52:54:00:75:4f:3f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-054639 Clientid:01:52:54:00:75:4f:3f}
I1202 11:45:21.038401   22779 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined IP address 192.168.39.77 and MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:21.038483   22779 main.go:141] libmachine: (functional-054639) Calling .GetSSHPort
I1202 11:45:21.038640   22779 main.go:141] libmachine: (functional-054639) Calling .GetSSHKeyPath
I1202 11:45:21.038757   22779 main.go:141] libmachine: (functional-054639) Calling .GetSSHUsername
I1202 11:45:21.038935   22779 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/functional-054639/id_rsa Username:docker}
I1202 11:45:21.169865   22779 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 11:45:21.280051   22779 main.go:141] libmachine: Making call to close driver server
I1202 11:45:21.280069   22779 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:21.280394   22779 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:21.280407   22779 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
I1202 11:45:21.280412   22779 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:21.280433   22779 main.go:141] libmachine: Making call to close driver server
I1202 11:45:21.280440   22779 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:21.280641   22779 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:21.280653   22779 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:21.280714   22779 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-054639 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| localhost/kicbase/echo-server           | functional-054639  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-054639  | c12818e91629a | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-054639 image ls --format table --alsologtostderr:
I1202 11:45:23.263884   22973 out.go:345] Setting OutFile to fd 1 ...
I1202 11:45:23.264011   22973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:23.264021   22973 out.go:358] Setting ErrFile to fd 2...
I1202 11:45:23.264028   22973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:23.264195   22973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
I1202 11:45:23.264761   22973 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:23.264882   22973 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:23.265235   22973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:23.265283   22973 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:23.280077   22973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
I1202 11:45:23.280597   22973 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:23.281111   22973 main.go:141] libmachine: Using API Version  1
I1202 11:45:23.281133   22973 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:23.281476   22973 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:23.281647   22973 main.go:141] libmachine: (functional-054639) Calling .GetState
I1202 11:45:23.283264   22973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:23.283310   22973 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:23.297580   22973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46469
I1202 11:45:23.297958   22973 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:23.298363   22973 main.go:141] libmachine: Using API Version  1
I1202 11:45:23.298390   22973 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:23.298665   22973 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:23.298846   22973 main.go:141] libmachine: (functional-054639) Calling .DriverName
I1202 11:45:23.299062   22973 ssh_runner.go:195] Run: systemctl --version
I1202 11:45:23.299089   22973 main.go:141] libmachine: (functional-054639) Calling .GetSSHHostname
I1202 11:45:23.301631   22973 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:23.302051   22973 main.go:141] libmachine: (functional-054639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:4f:3f", ip: ""} in network mk-functional-054639: {Iface:virbr1 ExpiryTime:2024-12-02 12:42:44 +0000 UTC Type:0 Mac:52:54:00:75:4f:3f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-054639 Clientid:01:52:54:00:75:4f:3f}
I1202 11:45:23.302088   22973 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined IP address 192.168.39.77 and MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:23.302310   22973 main.go:141] libmachine: (functional-054639) Calling .GetSSHPort
I1202 11:45:23.302467   22973 main.go:141] libmachine: (functional-054639) Calling .GetSSHKeyPath
I1202 11:45:23.302589   22973 main.go:141] libmachine: (functional-054639) Calling .GetSSHUsername
I1202 11:45:23.302731   22973 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/functional-054639/id_rsa Username:docker}
I1202 11:45:23.402892   22973 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 11:45:23.476451   22973 main.go:141] libmachine: Making call to close driver server
I1202 11:45:23.476472   22973 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:23.476754   22973 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:23.476788   22973 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:23.476800   22973 main.go:141] libmachine: Making call to close driver server
I1202 11:45:23.476809   22973 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:23.476768   22973 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
I1202 11:45:23.477004   22973 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:23.477018   22973 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:23.477047   22973 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-054639 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab08
54f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTag
s":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"3a5bc24055c9ebf
df31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-054639"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"847c7bc1a541865e150
af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"c12818e91629af14dbbac0f1e2d597376644182931325cedbd83d4b236087d97","repoDigests":["localhost
/minikube-local-cache-test@sha256:13652db1a8a918d0216ab1ab2602cad27379d76962eaec3a47d37a8cbdd6df26"],"repoTags":["localhost/minikube-local-cache-test:functional-054639"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-054639 image ls --format json --alsologtostderr:
I1202 11:45:23.031363   22950 out.go:345] Setting OutFile to fd 1 ...
I1202 11:45:23.031469   22950 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:23.031479   22950 out.go:358] Setting ErrFile to fd 2...
I1202 11:45:23.031483   22950 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:23.031683   22950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
I1202 11:45:23.032342   22950 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:23.032458   22950 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:23.032810   22950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:23.032850   22950 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:23.047213   22950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
I1202 11:45:23.047639   22950 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:23.048218   22950 main.go:141] libmachine: Using API Version  1
I1202 11:45:23.048256   22950 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:23.048555   22950 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:23.048697   22950 main.go:141] libmachine: (functional-054639) Calling .GetState
I1202 11:45:23.050432   22950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:23.050463   22950 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:23.064286   22950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37653
I1202 11:45:23.064671   22950 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:23.065097   22950 main.go:141] libmachine: Using API Version  1
I1202 11:45:23.065120   22950 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:23.065449   22950 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:23.065626   22950 main.go:141] libmachine: (functional-054639) Calling .DriverName
I1202 11:45:23.065800   22950 ssh_runner.go:195] Run: systemctl --version
I1202 11:45:23.065827   22950 main.go:141] libmachine: (functional-054639) Calling .GetSSHHostname
I1202 11:45:23.068393   22950 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:23.068757   22950 main.go:141] libmachine: (functional-054639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:4f:3f", ip: ""} in network mk-functional-054639: {Iface:virbr1 ExpiryTime:2024-12-02 12:42:44 +0000 UTC Type:0 Mac:52:54:00:75:4f:3f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-054639 Clientid:01:52:54:00:75:4f:3f}
I1202 11:45:23.068790   22950 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined IP address 192.168.39.77 and MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:23.068906   22950 main.go:141] libmachine: (functional-054639) Calling .GetSSHPort
I1202 11:45:23.069064   22950 main.go:141] libmachine: (functional-054639) Calling .GetSSHKeyPath
I1202 11:45:23.069183   22950 main.go:141] libmachine: (functional-054639) Calling .GetSSHUsername
I1202 11:45:23.069307   22950 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/functional-054639/id_rsa Username:docker}
I1202 11:45:23.163924   22950 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 11:45:23.214210   22950 main.go:141] libmachine: Making call to close driver server
I1202 11:45:23.214223   22950 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:23.214490   22950 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
I1202 11:45:23.214555   22950 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:23.214567   22950 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:23.214577   22950 main.go:141] libmachine: Making call to close driver server
I1202 11:45:23.214584   22950 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:23.214766   22950 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:23.214782   22950 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:23.214797   22950 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
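Note: the JSON listing above is a plain array of image objects, so it can be post-processed with standard tools (a minimal sketch; the jq filter is an assumption based on the repoTags field visible in the output, not something the test itself runs):
	$ minikube -p functional-054639 image ls --format json | jq -r '.[].repoTags[]?'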

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-054639 image ls --format yaml --alsologtostderr:
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c12818e91629af14dbbac0f1e2d597376644182931325cedbd83d4b236087d97
repoDigests:
- localhost/minikube-local-cache-test@sha256:13652db1a8a918d0216ab1ab2602cad27379d76962eaec3a47d37a8cbdd6df26
repoTags:
- localhost/minikube-local-cache-test:functional-054639
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-054639
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-054639 image ls --format yaml --alsologtostderr:
I1202 11:45:21.332860   22818 out.go:345] Setting OutFile to fd 1 ...
I1202 11:45:21.333010   22818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:21.333022   22818 out.go:358] Setting ErrFile to fd 2...
I1202 11:45:21.333029   22818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:45:21.333310   22818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
I1202 11:45:21.334154   22818 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:21.334327   22818 config.go:182] Loaded profile config "functional-054639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:45:21.334896   22818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:21.334953   22818 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:21.349776   22818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
I1202 11:45:21.350268   22818 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:21.350873   22818 main.go:141] libmachine: Using API Version  1
I1202 11:45:21.350898   22818 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:21.351267   22818 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:21.351494   22818 main.go:141] libmachine: (functional-054639) Calling .GetState
I1202 11:45:21.353335   22818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:45:21.353394   22818 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:45:21.367174   22818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
I1202 11:45:21.367582   22818 main.go:141] libmachine: () Calling .GetVersion
I1202 11:45:21.368095   22818 main.go:141] libmachine: Using API Version  1
I1202 11:45:21.368125   22818 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:45:21.368448   22818 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:45:21.368616   22818 main.go:141] libmachine: (functional-054639) Calling .DriverName
I1202 11:45:21.368794   22818 ssh_runner.go:195] Run: systemctl --version
I1202 11:45:21.368817   22818 main.go:141] libmachine: (functional-054639) Calling .GetSSHHostname
I1202 11:45:21.371346   22818 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:21.371711   22818 main.go:141] libmachine: (functional-054639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:4f:3f", ip: ""} in network mk-functional-054639: {Iface:virbr1 ExpiryTime:2024-12-02 12:42:44 +0000 UTC Type:0 Mac:52:54:00:75:4f:3f Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-054639 Clientid:01:52:54:00:75:4f:3f}
I1202 11:45:21.371745   22818 main.go:141] libmachine: (functional-054639) DBG | domain functional-054639 has defined IP address 192.168.39.77 and MAC address 52:54:00:75:4f:3f in network mk-functional-054639
I1202 11:45:21.371963   22818 main.go:141] libmachine: (functional-054639) Calling .GetSSHPort
I1202 11:45:21.372112   22818 main.go:141] libmachine: (functional-054639) Calling .GetSSHKeyPath
I1202 11:45:21.372279   22818 main.go:141] libmachine: (functional-054639) Calling .GetSSHUsername
I1202 11:45:21.372429   22818 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/functional-054639/id_rsa Username:docker}
I1202 11:45:21.497506   22818 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 11:45:21.605336   22818 main.go:141] libmachine: Making call to close driver server
I1202 11:45:21.605353   22818 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:21.605639   22818 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:21.605681   22818 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:45:21.605707   22818 main.go:141] libmachine: Making call to close driver server
I1202 11:45:21.605720   22818 main.go:141] libmachine: (functional-054639) Calling .Close
I1202 11:45:21.605665   22818 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
I1202 11:45:21.606013   22818 main.go:141] libmachine: (functional-054639) DBG | Closing plugin on server side
I1202 11:45:21.606023   22818 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:45:21.606065   22818 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-054639
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image load --daemon kicbase/echo-server:functional-054639 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 image load --daemon kicbase/echo-server:functional-054639 --alsologtostderr: (1.780952814s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)
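Note: together with the Setup step above, loading an image from the local Docker daemon into the cluster follows this pattern outside the harness (a minimal sketch; "minikube" stands in for the out/minikube-linux-amd64 binary under test):
	$ docker pull kicbase/echo-server:1.0
	$ docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-054639
	$ minikube -p functional-054639 image load --daemon kicbase/echo-server:functional-054639
	$ minikube -p functional-054639 image ls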

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image load --daemon kicbase/echo-server:functional-054639 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-054639
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image load --daemon kicbase/echo-server:functional-054639 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image save kicbase/echo-server:functional-054639 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image rm kicbase/echo-server:functional-054639 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-054639 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.904271408s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.20s)
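Note: combined with ImageSaveToFile above, this test exercises a tarball round trip (a minimal sketch; /tmp/echo-server-save.tar is an illustrative path, the tests use a workspace-local file):
	$ minikube -p functional-054639 image save kicbase/echo-server:functional-054639 /tmp/echo-server-save.tar
	$ minikube -p functional-054639 image load /tmp/echo-server-save.tar
	$ minikube -p functional-054639 image ls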

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-054639
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 image save --daemon kicbase/echo-server:functional-054639 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-054639
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)
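Note: the reverse direction, exporting an image from the cluster back into the local Docker daemon, mirrors the steps above (a minimal sketch restating the logged commands):
	$ docker rmi kicbase/echo-server:functional-054639
	$ minikube -p functional-054639 image save --daemon kicbase/echo-server:functional-054639
	$ docker image inspect localhost/kicbase/echo-server:functional-054639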

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 service list -o json
functional_test.go:1494: Took "534.933791ms" to run "out/minikube-linux-amd64 -p functional-054639 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.77:31061
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 service hello-node --url
2024/12/02 11:45:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.77:31061
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
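Note: outside the test, the discovered endpoint is usually consumed via command substitution (a minimal sketch; curl and a still-running hello-node deployment are assumptions, not part of the test):
	$ URL=$(minikube -p functional-054639 service hello-node --url)
	$ curl "$URL"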

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdspecific-port3353827160/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.086277ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 11:45:17.418070   13416 retry.go:31] will retry after 546.73846ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdspecific-port3353827160/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-054639 ssh "sudo umount -f /mount-9p": exit status 1 (226.257806ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-054639 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdspecific-port3353827160/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)
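Note: the same 9p mount on a fixed port can be driven by hand, running the mount in one terminal and verifying it from another (a minimal sketch; /tmp/hostdir is an illustrative host path):
	$ minikube mount -p functional-054639 /tmp/hostdir:/mount-9p --port 46464
	# in a second terminal:
	$ minikube -p functional-054639 ssh "findmnt -T /mount-9p | grep 9p"
	$ minikube -p functional-054639 ssh "sudo umount -f /mount-9p"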

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2053610608/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2053610608/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2053610608/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-054639 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-054639 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2053610608/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2053610608/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-054639 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2053610608/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.91s)
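Note: when several mounts are active, the test tears them all down with the kill flag, which is also the manual cleanup path (a minimal sketch restating the logged command):
	$ minikube mount -p functional-054639 --kill=true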

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-054639
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-054639
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-054639
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (196s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-604935 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1202 11:47:49.238072   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:48:16.947181   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-604935 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.361386445s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.00s)
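Note: the HA bring-up above is a single start invocation followed by a status check (a minimal sketch; "minikube" stands in for the out/minikube-linux-amd64 binary under test):
	$ minikube start -p ha-604935 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	$ minikube -p ha-604935 status -v=7 --alsologtostderr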

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-604935 -- rollout status deployment/busybox: (3.822026868s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-8jxc4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-l5kq7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-xbb9t -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-8jxc4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-l5kq7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-xbb9t -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-8jxc4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-l5kq7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-xbb9t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.88s)
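The deploy step applies a busybox Deployment from testdata and then resolves cluster DNS from every replica. A condensed sketch of the same check; the app=busybox label selector is an assumption for illustration, since the manifest itself is not shown in this log:

    # wait for all replicas, then resolve the in-cluster service name from each one
    kubectl --context ha-604935 rollout status deployment/busybox
    for pod in $(kubectl --context ha-604935 get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-604935 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done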

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-8jxc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-8jxc4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-l5kq7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-l5kq7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-xbb9t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-604935 -- exec busybox-7dff88458-xbb9t -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)
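The pipeline above extracts the address that busybox's resolver returns for host.minikube.internal: awk 'NR==5' keeps the fifth line of the nslookup output and cut -d' ' -f3 takes its third space-separated field, which is then pinged from the pod. A standalone sketch, assuming busybox's nslookup output layout and one of the pod names from this run:

    # resolve the host address from inside the pod, then send one ICMP probe back to it
    HOST_IP=$(kubectl --context ha-604935 exec busybox-7dff88458-8jxc4 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-604935 exec busybox-7dff88458-8jxc4 -- ping -c 1 "$HOST_IP"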

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-604935 -v=7 --alsologtostderr
E1202 11:50:01.369861   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:01.376214   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:01.387592   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:01.409006   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:01.450384   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:01.531847   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:01.693442   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:02.015299   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:02.657454   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:03.939437   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:50:06.501379   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-604935 -v=7 --alsologtostderr: (55.68550853s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.56s)
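The repeated cert_rotation errors most likely stem from the test process still watching the client.crt of the earlier functional-054639 profile, whose files are already gone; the node addition itself succeeds. A sketch of the same operation and its verification:

    # add a worker node (no --control-plane flag) and re-check cluster health
    minikube node add -p ha-604935
    minikube node list -p ha-604935
    minikube -p ha-604935 status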

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-604935 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1202 11:50:11.622730   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp testdata/cp-test.txt ha-604935:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935:/home/docker/cp-test.txt ha-604935-m02:/home/docker/cp-test_ha-604935_ha-604935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test_ha-604935_ha-604935-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935:/home/docker/cp-test.txt ha-604935-m03:/home/docker/cp-test_ha-604935_ha-604935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test_ha-604935_ha-604935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935:/home/docker/cp-test.txt ha-604935-m04:/home/docker/cp-test_ha-604935_ha-604935-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test_ha-604935_ha-604935-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp testdata/cp-test.txt ha-604935-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m02:/home/docker/cp-test.txt ha-604935:/home/docker/cp-test_ha-604935-m02_ha-604935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test_ha-604935-m02_ha-604935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m02:/home/docker/cp-test.txt ha-604935-m03:/home/docker/cp-test_ha-604935-m02_ha-604935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test_ha-604935-m02_ha-604935-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m02:/home/docker/cp-test.txt ha-604935-m04:/home/docker/cp-test_ha-604935-m02_ha-604935-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test_ha-604935-m02_ha-604935-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp testdata/cp-test.txt ha-604935-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt ha-604935:/home/docker/cp-test_ha-604935-m03_ha-604935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test_ha-604935-m03_ha-604935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt ha-604935-m02:/home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test_ha-604935-m03_ha-604935-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m03:/home/docker/cp-test.txt ha-604935-m04:/home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test_ha-604935-m03_ha-604935-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp testdata/cp-test.txt ha-604935-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test.txt"
E1202 11:50:21.864136   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1292724683/001/cp-test_ha-604935-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt ha-604935:/home/docker/cp-test_ha-604935-m04_ha-604935.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test_ha-604935-m04_ha-604935.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt ha-604935-m02:/home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m02 "sudo cat /home/docker/cp-test_ha-604935-m04_ha-604935-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 cp ha-604935-m04:/home/docker/cp-test.txt ha-604935-m03:/home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 ssh -n ha-604935-m03 "sudo cat /home/docker/cp-test_ha-604935-m04_ha-604935-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.51s)
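The copy matrix above exercises minikube cp in every direction (host to node, node to host, node to node) and reads each file back over minikube ssh. A minimal round trip, using the node names from this run:

    # host -> primary node, then verify over SSH
    minikube -p ha-604935 cp testdata/cp-test.txt ha-604935:/home/docker/cp-test.txt
    minikube -p ha-604935 ssh -n ha-604935 "sudo cat /home/docker/cp-test.txt"
    # node -> node copies go through the same cp subcommand
    minikube -p ha-604935 cp ha-604935:/home/docker/cp-test.txt ha-604935-m02:/home/docker/cp-test.txt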

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 node delete m03 -v=7 --alsologtostderr
E1202 12:00:01.369611   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-604935 node delete m03 -v=7 --alsologtostderr: (15.744456051s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.47s)
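Node removal is checked both from minikube's side and against the API server; the go-template in the last step prints one Ready condition per node. A compact equivalent, using a jsonpath filter instead of the template (the jsonpath shown here is an illustration, not what the test runs):

    minikube -p ha-604935 node delete m03
    # expect one "True" per remaining node
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'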

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (354.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-604935 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1202 12:02:49.244982   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:05:01.370161   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:06:24.440364   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:07:49.238119   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-604935 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.5518622s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.46s)
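Restarting reuses the existing profile: running minikube start against an already-created profile brings the stopped machines back up instead of provisioning new ones, and the same Ready-status template confirms every node rejoins. A sketch:

    minikube start -p ha-604935 --wait=true
    minikube -p ha-604935 status
    # one line per node; all of them should print True
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'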

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-604935 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-604935 --control-plane -v=7 --alsologtostderr: (1m18.805884753s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-604935 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.62s)
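Scaling the control plane back up uses the same node add subcommand with --control-plane; the new member has to join etcd and pull the control-plane images, which is why this step takes noticeably longer than adding a worker. A sketch:

    minikube node add -p ha-604935 --control-plane
    # the new node should report apiserver: Running alongside the existing control planes
    minikube -p ha-604935 status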

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (52.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-175546 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1202 12:10:01.370624   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-175546 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (52.258286803s)
--- PASS: TestJSONOutput/start/Command (52.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-175546 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-175546 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-175546 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-175546 --output=json --user=testUser: (7.348955825s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-078906 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-078906 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.589073ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"51ec9ab4-ec2a-442f-91f0-092d2199f915","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-078906] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c0bb08e-00fd-463d-acb6-2ce25f91f321","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20033"}}
	{"specversion":"1.0","id":"0327bce1-2c6f-497a-a7bc-931bfb2c0a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a980db07-0867-474c-931e-6c586d1e8312","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig"}}
	{"specversion":"1.0","id":"dc403e46-2bbf-4083-8c6d-f3024fa5fe66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube"}}
	{"specversion":"1.0","id":"fc91d168-0a6a-4956-84df-33202ce9d37d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0410b84c-c5c1-408a-b35d-d51dc73caf4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e1596f6a-70b6-486b-8743-e1e91c2711eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-078906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-078906
--- PASS: TestErrorJSONOutput (0.18s)
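With --output=json, minikube prints one CloudEvents-style JSON object per line, as in the captured stdout above; failures arrive as io.k8s.sigs.minikube.error events that carry the exit code (56, DRV_UNSUPPORTED_OS, in this case). A sketch of consuming that stream with jq; jq is an assumption here, not something the test itself uses:

    # reproduce the unsupported-driver error and pull the code and message out of the event stream
    minikube start -p json-output-error-078906 --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.message'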

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.93s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-087352 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-087352 --driver=kvm2  --container-runtime=crio: (42.333298135s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-103144 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-103144 --driver=kvm2  --container-runtime=crio: (43.805206634s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-087352
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-103144
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-103144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-103144
helpers_test.go:175: Cleaning up "first-087352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-087352
--- PASS: TestMinikubeProfile (88.93s)
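minikube profile <name> switches the active profile and profile list -ojson reports all of them; the test simply flips between the two clusters and re-reads the listing each time. A sketch, assuming the JSON output keeps its valid/invalid grouping of profiles:

    # make first-087352 the active profile, then list every healthy profile by name
    minikube profile first-087352
    minikube profile list -ojson | jq -r '.valid[].Name'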

                                                
                                    
TestMountStart/serial/StartWithMountFirst (32.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-485191 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1202 12:12:49.244725   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-485191 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.039194414s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.04s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-485191 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-485191 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
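The mount tests boot the VM with a 9p host mount and then check it from inside the guest with plain SSH one-liners. A sketch using the flags from this run:

    minikube start -p mount-start-1-485191 --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # the host directory should be visible and mounted via the 9p protocol
    minikube -p mount-start-1-485191 ssh -- ls /minikube-host
    minikube -p mount-start-1-485191 ssh -- "mount | grep 9p"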

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.61s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-501144 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-501144 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.607495131s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.61s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501144 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501144 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-485191 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501144 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501144 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-501144
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-501144: (1.302134325s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.7s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-501144
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-501144: (20.699449988s)
--- PASS: TestMountStart/serial/RestartStopped (21.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501144 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501144 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-191330 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1202 12:15:01.370051   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-191330 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.351137353s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.75s)
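A two-node cluster (one control plane plus one worker) is the multinode baseline; --nodes=2 provisions both machines in a single start. A sketch with the flags from this run:

    minikube start -p multinode-191330 --nodes=2 --memory=2200 --driver=kvm2 --container-runtime=crio
    minikube -p multinode-191330 status --alsologtostderr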

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-191330 -- rollout status deployment/busybox: (3.418990067s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-6dhcj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-sgjhj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-6dhcj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-sgjhj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-6dhcj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-sgjhj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.79s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-6dhcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-6dhcj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-sgjhj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-191330 -- exec busybox-7dff88458-sgjhj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
TestMultiNode/serial/AddNode (49.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-191330 -v 3 --alsologtostderr
E1202 12:15:52.309970   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-191330 -v 3 --alsologtostderr: (48.879862994s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.43s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-191330 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp testdata/cp-test.txt multinode-191330:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2126939927/001/cp-test_multinode-191330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330:/home/docker/cp-test.txt multinode-191330-m02:/home/docker/cp-test_multinode-191330_multinode-191330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m02 "sudo cat /home/docker/cp-test_multinode-191330_multinode-191330-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330:/home/docker/cp-test.txt multinode-191330-m03:/home/docker/cp-test_multinode-191330_multinode-191330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m03 "sudo cat /home/docker/cp-test_multinode-191330_multinode-191330-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp testdata/cp-test.txt multinode-191330-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2126939927/001/cp-test_multinode-191330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt multinode-191330:/home/docker/cp-test_multinode-191330-m02_multinode-191330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330 "sudo cat /home/docker/cp-test_multinode-191330-m02_multinode-191330.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330-m02:/home/docker/cp-test.txt multinode-191330-m03:/home/docker/cp-test_multinode-191330-m02_multinode-191330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m03 "sudo cat /home/docker/cp-test_multinode-191330-m02_multinode-191330-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp testdata/cp-test.txt multinode-191330-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2126939927/001/cp-test_multinode-191330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt multinode-191330:/home/docker/cp-test_multinode-191330-m03_multinode-191330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330 "sudo cat /home/docker/cp-test_multinode-191330-m03_multinode-191330.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 cp multinode-191330-m03:/home/docker/cp-test.txt multinode-191330-m02:/home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 ssh -n multinode-191330-m02 "sudo cat /home/docker/cp-test_multinode-191330-m03_multinode-191330-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.95s)

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 node stop m03: (1.525000863s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-191330 status: exit status 7 (408.992912ms)

                                                
                                                
-- stdout --
	multinode-191330
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-191330-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-191330-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr: exit status 7 (409.428901ms)

                                                
                                                
-- stdout --
	multinode-191330
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-191330-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-191330-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:16:42.612614   40129 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:16:42.612819   40129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:16:42.612827   40129 out.go:358] Setting ErrFile to fd 2...
	I1202 12:16:42.612831   40129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:16:42.613004   40129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:16:42.613161   40129 out.go:352] Setting JSON to false
	I1202 12:16:42.613184   40129 mustload.go:65] Loading cluster: multinode-191330
	I1202 12:16:42.613226   40129 notify.go:220] Checking for updates...
	I1202 12:16:42.613526   40129 config.go:182] Loaded profile config "multinode-191330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:16:42.613542   40129 status.go:174] checking status of multinode-191330 ...
	I1202 12:16:42.613958   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.614023   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.631945   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1202 12:16:42.632431   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.632952   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.632973   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.633319   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.633519   40129 main.go:141] libmachine: (multinode-191330) Calling .GetState
	I1202 12:16:42.635233   40129 status.go:371] multinode-191330 host status = "Running" (err=<nil>)
	I1202 12:16:42.635250   40129 host.go:66] Checking if "multinode-191330" exists ...
	I1202 12:16:42.635530   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.635561   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.649798   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43785
	I1202 12:16:42.650129   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.650551   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.650572   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.650858   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.651012   40129 main.go:141] libmachine: (multinode-191330) Calling .GetIP
	I1202 12:16:42.653597   40129 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:16:42.653942   40129 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:16:42.653964   40129 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:16:42.654084   40129 host.go:66] Checking if "multinode-191330" exists ...
	I1202 12:16:42.654333   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.654382   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.668709   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42175
	I1202 12:16:42.669055   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.669452   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.669475   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.669756   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.669901   40129 main.go:141] libmachine: (multinode-191330) Calling .DriverName
	I1202 12:16:42.670040   40129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 12:16:42.670058   40129 main.go:141] libmachine: (multinode-191330) Calling .GetSSHHostname
	I1202 12:16:42.672550   40129 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:16:42.672963   40129 main.go:141] libmachine: (multinode-191330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:a6:b2", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:14:02 +0000 UTC Type:0 Mac:52:54:00:9b:a6:b2 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:multinode-191330 Clientid:01:52:54:00:9b:a6:b2}
	I1202 12:16:42.673007   40129 main.go:141] libmachine: (multinode-191330) DBG | domain multinode-191330 has defined IP address 192.168.39.135 and MAC address 52:54:00:9b:a6:b2 in network mk-multinode-191330
	I1202 12:16:42.673119   40129 main.go:141] libmachine: (multinode-191330) Calling .GetSSHPort
	I1202 12:16:42.673249   40129 main.go:141] libmachine: (multinode-191330) Calling .GetSSHKeyPath
	I1202 12:16:42.673353   40129 main.go:141] libmachine: (multinode-191330) Calling .GetSSHUsername
	I1202 12:16:42.673500   40129 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330/id_rsa Username:docker}
	I1202 12:16:42.751508   40129 ssh_runner.go:195] Run: systemctl --version
	I1202 12:16:42.758103   40129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:16:42.773171   40129 kubeconfig.go:125] found "multinode-191330" server: "https://192.168.39.135:8443"
	I1202 12:16:42.773198   40129 api_server.go:166] Checking apiserver status ...
	I1202 12:16:42.773232   40129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 12:16:42.787031   40129 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup
	W1202 12:16:42.798112   40129 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 12:16:42.798156   40129 ssh_runner.go:195] Run: ls
	I1202 12:16:42.802365   40129 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I1202 12:16:42.806408   40129 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I1202 12:16:42.806426   40129 status.go:463] multinode-191330 apiserver status = Running (err=<nil>)
	I1202 12:16:42.806435   40129 status.go:176] multinode-191330 status: &{Name:multinode-191330 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 12:16:42.806452   40129 status.go:174] checking status of multinode-191330-m02 ...
	I1202 12:16:42.806717   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.806750   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.821448   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I1202 12:16:42.821898   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.822345   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.822362   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.822698   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.822872   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .GetState
	I1202 12:16:42.824339   40129 status.go:371] multinode-191330-m02 host status = "Running" (err=<nil>)
	I1202 12:16:42.824355   40129 host.go:66] Checking if "multinode-191330-m02" exists ...
	I1202 12:16:42.824636   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.824667   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.839692   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40371
	I1202 12:16:42.840093   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.840659   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.840680   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.840979   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.841146   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .GetIP
	I1202 12:16:42.843556   40129 main.go:141] libmachine: (multinode-191330-m02) DBG | domain multinode-191330-m02 has defined MAC address 52:54:00:f8:3d:46 in network mk-multinode-191330
	I1202 12:16:42.843903   40129 main.go:141] libmachine: (multinode-191330-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:46", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:15:04 +0000 UTC Type:0 Mac:52:54:00:f8:3d:46 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-191330-m02 Clientid:01:52:54:00:f8:3d:46}
	I1202 12:16:42.843930   40129 main.go:141] libmachine: (multinode-191330-m02) DBG | domain multinode-191330-m02 has defined IP address 192.168.39.237 and MAC address 52:54:00:f8:3d:46 in network mk-multinode-191330
	I1202 12:16:42.844006   40129 host.go:66] Checking if "multinode-191330-m02" exists ...
	I1202 12:16:42.844333   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.844376   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.858522   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I1202 12:16:42.858845   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.859210   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.859227   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.859507   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.859667   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .DriverName
	I1202 12:16:42.859830   40129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 12:16:42.859852   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .GetSSHHostname
	I1202 12:16:42.862150   40129 main.go:141] libmachine: (multinode-191330-m02) DBG | domain multinode-191330-m02 has defined MAC address 52:54:00:f8:3d:46 in network mk-multinode-191330
	I1202 12:16:42.862591   40129 main.go:141] libmachine: (multinode-191330-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3d:46", ip: ""} in network mk-multinode-191330: {Iface:virbr1 ExpiryTime:2024-12-02 13:15:04 +0000 UTC Type:0 Mac:52:54:00:f8:3d:46 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-191330-m02 Clientid:01:52:54:00:f8:3d:46}
	I1202 12:16:42.862621   40129 main.go:141] libmachine: (multinode-191330-m02) DBG | domain multinode-191330-m02 has defined IP address 192.168.39.237 and MAC address 52:54:00:f8:3d:46 in network mk-multinode-191330
	I1202 12:16:42.862762   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .GetSSHPort
	I1202 12:16:42.862919   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .GetSSHKeyPath
	I1202 12:16:42.863048   40129 main.go:141] libmachine: (multinode-191330-m02) Calling .GetSSHUsername
	I1202 12:16:42.863194   40129 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/multinode-191330-m02/id_rsa Username:docker}
	I1202 12:16:42.947286   40129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 12:16:42.961116   40129 status.go:176] multinode-191330-m02 status: &{Name:multinode-191330-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 12:16:42.961150   40129 status.go:174] checking status of multinode-191330-m03 ...
	I1202 12:16:42.961502   40129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1202 12:16:42.961540   40129 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1202 12:16:42.976191   40129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39359
	I1202 12:16:42.976626   40129 main.go:141] libmachine: () Calling .GetVersion
	I1202 12:16:42.977089   40129 main.go:141] libmachine: Using API Version  1
	I1202 12:16:42.977111   40129 main.go:141] libmachine: () Calling .SetConfigRaw
	I1202 12:16:42.977383   40129 main.go:141] libmachine: () Calling .GetMachineName
	I1202 12:16:42.977539   40129 main.go:141] libmachine: (multinode-191330-m03) Calling .GetState
	I1202 12:16:42.979030   40129 status.go:371] multinode-191330-m03 host status = "Stopped" (err=<nil>)
	I1202 12:16:42.979042   40129 status.go:384] host is not running, skipping remaining checks
	I1202 12:16:42.979047   40129 status.go:176] multinode-191330-m03 status: &{Name:multinode-191330-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
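Note: the two non-zero exits above are the expected outcome here, not failures. With multinode-191330-m03 deliberately stopped, "minikube status" reports that node as Stopped and exits 7. A minimal sketch of repeating the check by hand, reusing the command recorded above (the exit-code comment is an expectation, not captured output):
	out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
	echo "status exit code: $?"   # 7 while multinode-191330-m03 is stopped; a clean 0 is only expected once every node is running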

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (37.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 node start m03 -v=7 --alsologtostderr: (36.678851438s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.27s)
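The stopped worker is brought back with "node start"; a short sketch of the sequence this test drives, using only the commands recorded above (expected results noted as comments):
	out/minikube-linux-amd64 -p multinode-191330 node start m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-191330 status -v=7 --alsologtostderr   # expected to exit 0 again once m03 is running
	kubectl get nodes                                                            # all three nodes expected to be listed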

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-191330 node delete m03: (1.546062794s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.04s)
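The go-template in the last kubectl invocation simply prints the Ready condition of every node; the quoting shown above is how the test framework echoes its exec arguments. For an interactive shell, an equivalent single-quoted form would be (a sketch, expected output noted as a comment):
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected: one "True" line per remaining node, i.e. two lines after m03 has been deleted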

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (175.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-191330 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1202 12:27:49.238363   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-191330 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m55.320042151s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (175.82s)
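Restarting the whole multi-node profile is just a repeated "start" on the existing profile with --wait=true; a condensed sketch of the commands recorded above (comments are expectations):
	out/minikube-linux-amd64 start -p multinode-191330 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p multinode-191330 status --alsologtostderr   # expected to exit 0 with both remaining nodes Running
	kubectl get nodes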

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-191330
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-191330-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-191330-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.914972ms)

                                                
                                                
-- stdout --
	* [multinode-191330-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-191330-m02' is duplicated with machine name 'multinode-191330-m02' in profile 'multinode-191330'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-191330-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-191330-m03 --driver=kvm2  --container-runtime=crio: (42.813885196s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-191330
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-191330: exit status 80 (213.662858ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-191330 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-191330-m03 already exists in multinode-191330-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-191330-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.92s)
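The two rejections above (exit 14 and exit 80) are the pass conditions: a new profile may not reuse a machine name that already belongs to another profile, and "node add" refuses a node name that is already taken. A sketch of the same sequence, reusing the recorded commands (comments are expectations):
	out/minikube-linux-amd64 start -p multinode-191330-m02 --driver=kvm2 --container-runtime=crio   # exit 14: clashes with machine multinode-191330-m02 in profile multinode-191330
	out/minikube-linux-amd64 start -p multinode-191330-m03 --driver=kvm2 --container-runtime=crio   # succeeds: m03 was deleted earlier, so this becomes a separate single-node profile
	out/minikube-linux-amd64 node add -p multinode-191330                                           # exit 80: the generated node name multinode-191330-m03 now collides with that standalone profile
	out/minikube-linux-amd64 delete -p multinode-191330-m03                                         # clean up the extra profile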

                                                
                                    
x
+
TestScheduledStopUnix (118.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-409074 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-409074 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.890720239s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409074 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-409074 -n scheduled-stop-409074
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409074 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1202 12:32:25.897787   13416 retry.go:31] will retry after 60.432µs: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.898911   13416 retry.go:31] will retry after 123.046µs: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.900035   13416 retry.go:31] will retry after 242.391µs: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.901164   13416 retry.go:31] will retry after 200.881µs: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.902282   13416 retry.go:31] will retry after 289.594µs: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.903380   13416 retry.go:31] will retry after 1.080914ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.904529   13416 retry.go:31] will retry after 1.679986ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.906766   13416 retry.go:31] will retry after 1.945608ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.908971   13416 retry.go:31] will retry after 2.73245ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.912164   13416 retry.go:31] will retry after 2.088057ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.914296   13416 retry.go:31] will retry after 3.565857ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.918502   13416 retry.go:31] will retry after 7.209979ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.926729   13416 retry.go:31] will retry after 15.78749ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.942897   13416 retry.go:31] will retry after 26.852217ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
I1202 12:32:25.970140   13416 retry.go:31] will retry after 20.083811ms: open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/scheduled-stop-409074/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409074 --cancel-scheduled
E1202 12:32:32.312169   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:32:49.244806   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409074 -n scheduled-stop-409074
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-409074
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-409074 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-409074
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-409074: exit status 7 (62.027751ms)

                                                
                                                
-- stdout --
	scheduled-stop-409074
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409074 -n scheduled-stop-409074
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409074 -n scheduled-stop-409074: exit status 7 (64.640454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-409074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-409074
--- PASS: TestScheduledStopUnix (118.45s)
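Scheduled stops are armed, cancelled and re-armed with the "--schedule"/"--cancel-scheduled" flags exercised above; a compact sketch of that flow (comments are expectations, not captured output):
	out/minikube-linux-amd64 stop -p scheduled-stop-409074 --schedule 5m         # arm a stop five minutes out
	out/minikube-linux-amd64 stop -p scheduled-stop-409074 --cancel-scheduled    # disarm it again
	out/minikube-linux-amd64 stop -p scheduled-stop-409074 --schedule 15s        # re-arm with a short delay
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-409074 -n scheduled-stop-409074   # prints "Stopped" and exits 7 once the timer has fired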

                                                
                                    
x
+
TestRunningBinaryUpgrade (226.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.101804102 start -p running-upgrade-449763 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.101804102 start -p running-upgrade-449763 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.974031995s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-449763 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-449763 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.535140022s)
helpers_test.go:175: Cleaning up "running-upgrade-449763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-449763
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-449763: (1.114329341s)
--- PASS: TestRunningBinaryUpgrade (226.31s)
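The running-binary upgrade is simply: create the profile with an older released minikube, then re-run "start" on the same profile with the binary under test. A sketch of the recorded sequence (the old-binary path is this run's temporary download):
	/tmp/minikube-v1.26.0.101804102 start -p running-upgrade-449763 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-449763 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-449763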

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.593948ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-405664] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
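Exit 14 (MK_USAGE) is the expected rejection: "--no-kubernetes" cannot be combined with "--kubernetes-version". A sketch of the corrective sequence suggested by the error text (the second start is the form the later sub-tests actually use):
	out/minikube-linux-amd64 config unset kubernetes-version   # only needed if a version has been pinned globally, per the hint above
	out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --driver=kvm2 --container-runtime=crio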

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (91.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405664 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405664 --driver=kvm2  --container-runtime=crio: (1m31.003808356s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-405664 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (91.25s)

                                                
                                    
x
+
TestPause/serial/Start (110.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-198058 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-198058 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m50.977918851s)
--- PASS: TestPause/serial/Start (110.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (68.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m7.710975937s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-405664 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-405664 status -o json: exit status 2 (244.88816ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-405664","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-405664
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (68.78s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (41.39s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-198058 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-198058 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.361513586s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (31.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405664 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.985298996s)
--- PASS: TestNoKubernetes/serial/Start (31.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-405664 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-405664 "sudo systemctl is-active --quiet service kubelet": exit status 1 (184.284055ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
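Exit status 1 here is the desired result: for a --no-kubernetes profile the kubelet unit must not be active, so the remote "systemctl is-active" fails (status 3 inside the guest) and "minikube ssh" surfaces that as a non-zero exit. A sketch of the same check:
	out/minikube-linux-amd64 ssh -p NoKubernetes-405664 "sudo systemctl is-active --quiet service kubelet"
	echo "kubelet active check exit: $?"   # non-zero is expected while no Kubernetes components are running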

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (23.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.012950623s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (10.227963933s)
--- PASS: TestNoKubernetes/serial/ProfileList (23.24s)

                                                
                                    
x
+
TestPause/serial/Pause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-198058 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-198058 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-198058 --output=json --layout=cluster: exit status 2 (228.013496ms)

                                                
                                                
-- stdout --
	{"Name":"pause-198058","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-198058","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)
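In the --layout=cluster JSON above, component state is encoded with HTTP-style codes: 200 "OK", 405 "Stopped", 418 "Paused"; the overall exit status 2 is what "status" returned for the paused cluster in this run. Re-querying it by hand is just (comment is an expectation):
	out/minikube-linux-amd64 status -p pause-198058 --output=json --layout=cluster
	# expected while paused: cluster StatusCode 418, kubelet 405, apiserver 418, exit status 2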

                                                
                                    
x
+
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-198058 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-198058 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-198058 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.88s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (13.01s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.004962945s)
--- PASS: TestPause/serial/VerifyDeletedResources (13.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (135.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.532814286 start -p stopped-upgrade-994398 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.532814286 start -p stopped-upgrade-994398 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (47.463604291s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.532814286 -p stopped-upgrade-994398 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.532814286 -p stopped-upgrade-994398 stop: (1.452818272s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-994398 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-994398 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.133749806s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.05s)
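The stopped-binary upgrade differs from the running one only in that the profile is stopped with the old binary before the new binary takes over; a sketch of the recorded sequence:
	/tmp/minikube-v1.26.0.532814286 start -p stopped-upgrade-994398 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.532814286 -p stopped-upgrade-994398 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-994398 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio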

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-405664
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-405664: (1.316596885s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (39.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-405664 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-405664 --driver=kvm2  --container-runtime=crio: (39.541327491s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-256954 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-256954 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (95.493719ms)

                                                
                                                
-- stdout --
	* [false-256954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:37:26.147473   51437 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:37:26.147562   51437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:37:26.147570   51437 out.go:358] Setting ErrFile to fd 2...
	I1202 12:37:26.147574   51437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:37:26.147732   51437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
	I1202 12:37:26.148287   51437 out.go:352] Setting JSON to false
	I1202 12:37:26.149168   51437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4798,"bootTime":1733138248,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:37:26.149263   51437 start.go:139] virtualization: kvm guest
	I1202 12:37:26.151248   51437 out.go:177] * [false-256954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:37:26.152426   51437 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:37:26.152464   51437 notify.go:220] Checking for updates...
	I1202 12:37:26.154584   51437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:37:26.155767   51437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
	I1202 12:37:26.156808   51437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
	I1202 12:37:26.157880   51437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:37:26.158916   51437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:37:26.160445   51437 config.go:182] Loaded profile config "NoKubernetes-405664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1202 12:37:26.160542   51437 config.go:182] Loaded profile config "kubernetes-upgrade-127536": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1202 12:37:26.160618   51437 config.go:182] Loaded profile config "stopped-upgrade-994398": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1202 12:37:26.160714   51437 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:37:26.194234   51437 out.go:177] * Using the kvm2 driver based on user configuration
	I1202 12:37:26.195340   51437 start.go:297] selected driver: kvm2
	I1202 12:37:26.195352   51437 start.go:901] validating driver "kvm2" against <nil>
	I1202 12:37:26.195363   51437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:37:26.197206   51437 out.go:201] 
	W1202 12:37:26.198348   51437 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 12:37:26.199442   51437 out.go:201] 

                                                
                                                
** /stderr **
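This exit 14 is the pass condition for the "false" CNI variant: the crio container runtime requires a CNI, so "--cni=false" is rejected before any VM is created. Reproduced in isolation (comment is an expectation):
	out/minikube-linux-amd64 start -p false-256954 --memory=2048 --alsologtostderr --cni=false --driver=kvm2 --container-runtime=crio
	# expected: exit 14, X Exiting due to MK_USAGE: The "crio" container runtime requires CNI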
net_test.go:88: 
----------------------- debugLogs start: false-256954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-256954" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-256954

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-256954"

                                                
                                                
----------------------- debugLogs end: false-256954 [took: 2.519439689s] --------------------------------
helpers_test.go:175: Cleaning up "false-256954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-256954
--- PASS: TestNetworkPlugins/group/false (2.75s)
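
Every probe in the debugLogs dump above fails the same way because the "false-256954" profile was never actually created; the collector still walks its full checklist of host and Kubernetes probes. A guard along these lines could skip the dump when the profile is missing. This is a minimal sketch in Go rather than the harness's own code: it assumes the binary under test lives at out/minikube-linux-amd64 and simply greps the plain-text output of "minikube profile list", the command the errors above point to.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // profileExists reports whether `minikube profile list` mentions the given
    // profile name. This is a coarse text match, not a structured lookup.
    func profileExists(binary, name string) bool {
        // Even if the command exits non-zero, inspect whatever output it produced.
        out, _ := exec.Command(binary, "profile", "list").CombinedOutput()
        return strings.Contains(string(out), name)
    }

    func main() {
        if !profileExists("out/minikube-linux-amd64", "false-256954") {
            fmt.Println("profile not found, skipping debug log collection")
            return
        }
        fmt.Println("profile found, collecting debug logs")
    }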

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-405664 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-405664 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.618451ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
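
The check above passes precisely because the ssh'd systemctl probe exits non-zero: "Process exited with status 3" is systemd's usual answer for an inactive unit, which is exactly what a --no-kubernetes profile should report for kubelet. A standalone Go sketch of the same probe, assuming the NoKubernetes-405664 profile is still running and the binary under test sits at out/minikube-linux-amd64:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Ask systemd inside the guest whether kubelet is active; --quiet
        // suppresses output, so only the exit status matters.
        cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-405664",
            "sudo systemctl is-active --quiet service kubelet")
        err := cmd.Run()

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("kubelet is active (unexpected for a --no-kubernetes profile)")
        case errors.As(err, &exitErr):
            fmt.Printf("kubelet is not active (ssh exited with status %d)\n", exitErr.ExitCode())
        default:
            fmt.Printf("could not reach the guest: %v\n", err)
        }
    }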

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-994398
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-658679 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1202 12:40:01.370159   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-658679 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m11.927492653s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658679 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec4496d6-f7d8-49db-9c91-99516b484a4a] Pending
helpers_test.go:344: "busybox" [ec4496d6-f7d8-49db-9c91-99516b484a4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec4496d6-f7d8-49db-9c91-99516b484a4a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004953182s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-658679 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)
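
The DeployApp step is a simple readiness loop: apply testdata/busybox.yaml, wait up to 8 minutes for a pod labelled integration-test=busybox to become healthy, then run one sanity command inside it. A rough Go sketch of that loop follows; it assumes kubectl is on PATH and the context name shown above, and it only polls the pod phase, whereas the real helper also watches the Ready/ContainersReady conditions visible in the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const context = "no-preload-658679" // context created by the test above
        deadline := time.Now().Add(8 * time.Minute)

        // Poll the pod phase until it reports Running or the deadline passes,
        // mirroring the "waiting 8m0s for pods matching ..." step in the log.
        for {
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pod", "busybox", "-o", "jsonpath={.status.phase}").Output()
            phase := strings.TrimSpace(string(out))
            if err == nil && phase == "Running" {
                break
            }
            if time.Now().After(deadline) {
                fmt.Printf("gave up waiting; last phase=%q err=%v\n", phase, err)
                return
            }
            time.Sleep(2 * time.Second)
        }

        // Same sanity check the test runs once the pod is up.
        out, err := exec.Command("kubectl", "--context", context,
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        fmt.Printf("ulimit -n => %s (err=%v)\n", strings.TrimSpace(string(out)), err)
    }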

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-658679 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-658679 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)
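
The addon is enabled with --images and --registries overrides that point metrics-server at registry.k8s.io/echoserver:1.4 behind the fake.domain registry, and the test then describes the deployment to confirm the substitution. A quicker spot check is to read the container image straight off the deployment; this sketch assumes kubectl on PATH and the metrics-server deployment name used by the describe command above.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Read the image the metrics-server deployment actually ended up with,
        // to confirm the --images/--registries overrides were applied.
        out, err := exec.Command("kubectl", "--context", "no-preload-658679",
            "-n", "kube-system", "get", "deploy", "metrics-server",
            "-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
        if err != nil {
            fmt.Printf("could not read deployment: %v\n", err)
            return
        }
        image := strings.TrimSpace(string(out))
        fmt.Println("metrics-server image:", image)
        if strings.HasPrefix(image, "fake.domain/") {
            fmt.Println("registry override is in effect")
        }
    }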

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (61.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-953044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-953044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m1.138810664s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-983490 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-983490 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (52.198600235s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-953044 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22f1984c-4a50-4749-9e21-6a7f68a2a310] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22f1984c-4a50-4749-9e21-6a7f68a2a310] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003728742s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-953044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-953044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-953044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-983490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-983490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077595398s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-983490 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-983490 --alsologtostderr -v=3: (7.340717485s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-983490 -n newest-cni-983490
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-983490 -n newest-cni-983490: exit status 7 (62.441182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-983490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-983490 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-983490 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (36.8228353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-983490 -n newest-cni-983490
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (652.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-658679 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-658679 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m51.79868838s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-658679 -n no-preload-658679
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (652.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-983490 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-983490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-983490 -n newest-cni-983490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-983490 -n newest-cni-983490: exit status 2 (228.936924ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-983490 -n newest-cni-983490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-983490 -n newest-cni-983490: exit status 2 (223.238573ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-983490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-983490 -n newest-cni-983490
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-983490 -n newest-cni-983490
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)
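
The Pause test leans on `minikube status --format` with Go templates: after `pause`, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, and the command exits non-zero (status 2 here), which the test explicitly tolerates. A small sketch that runs the same probes and keeps going on non-zero exits, assuming the newest-cni-983490 profile still exists and the binary path shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // componentStatus runs `minikube status` with a Go-template format string and
    // returns whatever landed on stdout. A non-zero exit is expected whenever a
    // component is not in the Running state, so it is reported but not fatal.
    func componentStatus(format string) (string, error) {
        cmd := exec.Command("out/minikube-linux-amd64", "status",
            "--format", format, "-p", "newest-cni-983490", "-n", "newest-cni-983490")
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for _, f := range []string{"{{.Host}}", "{{.APIServer}}", "{{.Kubelet}}"} {
            status, err := componentStatus(f)
            fmt.Printf("%-15s %-8s (exit err: %v)\n", f, status, err)
        }
    }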

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (326.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-653783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-653783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (5m26.38119062s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (326.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (533.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-953044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-953044 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (8m53.394515434s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-953044 -n embed-certs-953044
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (533.63s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-666766 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-666766 --alsologtostderr -v=3: (6.284415479s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666766 -n old-k8s-version-666766: exit status 7 (61.92763ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-666766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [23bd5e8e-a17b-4b42-be44-a20919dd29d4] Pending
helpers_test.go:344: "busybox" [23bd5e8e-a17b-4b42-be44-a20919dd29d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [23bd5e8e-a17b-4b42-be44-a20919dd29d4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004064838s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-653783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-653783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (616.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-653783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1202 12:52:49.238167   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-653783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m16.317814059s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653783 -n default-k8s-diff-port-653783
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (616.57s)
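
default-k8s-diff-port is started with --apiserver-port=8444 instead of the usual 8443, so the profile's kubeconfig entry should point at that port. A quick way to confirm it, sketched under the assumption that kubectl is on PATH and that `config view --minify` honours the --context override:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Print the API server URL for the profile's context; with
        // --apiserver-port=8444 it should end in ":8444" rather than ":8443".
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-653783",
            "config", "view", "--minify",
            "-o", "jsonpath={.clusters[0].cluster.server}").Output()
        if err != nil {
            fmt.Printf("could not read kubeconfig: %v\n", err)
            return
        }
        server := strings.TrimSpace(string(out))
        fmt.Println("API server:", server, "uses port 8444:", strings.HasSuffix(server, ":8444"))
    }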

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (80.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m20.821555381s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (70.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1202 13:10:01.369780   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.705752256s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-256954 "pgrep -a kubelet"
I1202 13:10:51.302938   13416 config.go:182] Loaded profile config "auto-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-256954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8m97z" [f9e5cb68-0e29-4d3a-b9c2-97e752b54e06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8m97z" [f9e5cb68-0e29-4d3a-b9c2-97e752b54e06] Running
E1202 13:10:59.727925   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:10:59.734273   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:10:59.745626   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:10:59.767747   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:10:59.809986   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:10:59.891429   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:11:00.053600   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:11:00.374972   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:11:01.017096   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004716295s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bh6xm" [101e08e6-e2c6-4061-ad6b-409679238547] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004291431s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
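
The three short checks above exercise different paths from inside the netcat pod: DNS resolves kubernetes.default through the cluster resolver, Localhost connects to the pod's own listener on 8080, and HairPin dials the pod back through its own Service name ("netcat"), which only succeeds when hairpin traffic is handled correctly. A compact sketch that reruns the same trio via kubectl exec, assuming the auto-256954 context and its netcat deployment are still around:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const context = "auto-256954" // profile created by the Start test above
        checks := map[string]string{
            "dns":       "nslookup kubernetes.default",
            "localhost": "nc -w 5 -i 5 -z localhost 8080",
            "hairpin":   "nc -w 5 -i 5 -z netcat 8080", // back through the pod's own service
        }
        for name, script := range checks {
            err := exec.Command("kubectl", "--context", context,
                "exec", "deployment/netcat", "--", "/bin/sh", "-c", script).Run()
            fmt.Printf("%-9s -> err=%v\n", name, err)
        }
    }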

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-256954 "pgrep -a kubelet"
I1202 13:11:07.521592   13416 config.go:182] Loaded profile config "kindnet-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-256954 replace --force -f testdata/netcat-deployment.yaml
I1202 13:11:07.739995   13416 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cfftt" [2f518c37-2e1c-4510-958c-5a82b7ff66aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1202 13:11:09.982808   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cfftt" [2f518c37-2e1c-4510-958c-5a82b7ff66aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004448808s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (77.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m17.333011745s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (81.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.63055824s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (120.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1202 13:11:40.706674   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:21.668742   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.127637   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.134242   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.145946   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.167346   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.208798   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.290219   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.452075   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:33.774091   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:34.415947   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m0.902818325s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (120.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kcw58" [61ab9d3a-6286-4603-bbd9-3ae0d8f81d94] Running
E1202 13:12:35.698295   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
E1202 13:12:38.259941   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006036454s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-256954 "pgrep -a kubelet"
I1202 13:12:40.907689   13416 config.go:182] Loaded profile config "calico-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-256954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t8phb" [0548f579-1ddb-4f35-b6f3-3403013f4360] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1202 13:12:43.381241   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t8phb" [0548f579-1ddb-4f35-b6f3-3403013f4360] Running
E1202 13:12:49.238294   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006054627s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1202 13:12:53.623162   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-256954 "pgrep -a kubelet"
I1202 13:12:56.413372   13416 config.go:182] Loaded profile config "custom-flannel-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-256954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5z7j5" [b6ec815e-0dcc-4fb7-88ba-a5872be00844] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5z7j5" [b6ec815e-0dcc-4fb7-88ba-a5872be00844] Running
E1202 13:13:04.450738   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/functional-054639/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003717865s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (69.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1202 13:13:14.104710   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/old-k8s-version-666766/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m9.735389154s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.74s)
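
The Start step brings up a single-node KVM cluster with the flannel CNI. A sketch of the equivalent invocation, plus a check that the flannel DaemonSet rolled out (the daemonset name and namespace are inferred from the ControllerPod step later in this report), is roughly:

minikube start -p flannel-256954 --memory=3072 --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 --container-runtime=crio
kubectl --context flannel-256954 -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=10m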

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (98.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-256954 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m38.924539632s)
--- PASS: TestNetworkPlugins/group/bridge/Start (98.92s)
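
For the bridge CNI run, the configuration minikube generated on the node can be inspected over ssh, using the same minikube ssh -p <profile> "<cmd>" form the KubeletFlags steps use (the exact file name under /etc/cni/net.d is not shown in this report, so it is globbed here):

minikube ssh -p bridge-256954 "sudo ls -l /etc/cni/net.d"
minikube ssh -p bridge-256954 "sudo cat /etc/cni/net.d/*"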

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-256954 "pgrep -a kubelet"
I1202 13:13:39.247257   13416 config.go:182] Loaded profile config "enable-default-cni-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-256954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5nvxk" [2698b2f7-0f42-445c-b099-430beeec811f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1202 13:13:43.593510   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/no-preload-658679/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5nvxk" [2698b2f7-0f42-445c-b099-430beeec811f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004847628s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vh57j" [1de56426-8a06-4904-8369-6510497ecd13] Running
E1202 13:14:22.757790   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004506656s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
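
The ControllerPod step polls for up to 10m until a pod labeled app=flannel in the kube-flannel namespace is Running; a one-line equivalent using kubectl wait is:

kubectl --context flannel-256954 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=600s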

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-256954 "pgrep -a kubelet"
I1202 13:14:25.740697   13416 config.go:182] Loaded profile config "flannel-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-256954 replace --force -f testdata/netcat-deployment.yaml
I1202 13:14:25.945976   13416 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bq2wn" [f48809a1-cdc4-4d14-85ac-2966e2a149a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bq2wn" [f48809a1-cdc4-4d14-85ac-2966e2a149a0] Running
E1202 13:14:32.999289   13416 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/default-k8s-diff-port-653783/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00322723s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-256954 "pgrep -a kubelet"
I1202 13:15:03.278310   13416 config.go:182] Loaded profile config "bridge-256954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-256954 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vvpr7" [ce9c436c-9348-4ec0-9280-68c74ef8b3f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vvpr7" [ce9c436c-9348-4ec0-9280-68c74ef8b3f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004610475s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-256954 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-256954 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (39/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.13
281 TestNetworkPlugins/group/kubenet 2.68
289 TestNetworkPlugins/group/cilium 3.09
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-093588 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
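
All eight TunnelCmd steps above were skipped because the test could not invoke route without a password prompt: minikube tunnel edits the host routing table and therefore needs passwordless sudo for the routing tools on the CI host. A sketch of a sudoers drop-in that would let these steps run non-interactively (the user name and binary paths are assumptions about this Jenkins agent; install it with visudo -f):

# /etc/sudoers.d/minikube-tunnel -- sketch only; adjust the user and paths to the host
jenkins ALL=(root) NOPASSWD: /usr/sbin/route, /usr/sbin/ip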

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-312407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-312407
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-256954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-256954" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-256954

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-256954"

                                                
                                                
----------------------- debugLogs end: kubenet-256954 [took: 2.549876702s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-256954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-256954
--- SKIP: TestNetworkPlugins/group/kubenet (2.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-256954 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-256954" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-256954

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-256954" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-256954"

                                                
                                                
----------------------- debugLogs end: cilium-256954 [took: 2.959604018s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-256954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-256954
--- SKIP: TestNetworkPlugins/group/cilium (3.09s)
